diff --git a/afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_content_list.json b/afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..fe2f7a4be9210b289b2a5019e463022e3eb788b5 --- /dev/null +++ b/afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:03efd4d1344c9ef35f6ab16a180265d1e40a368a64500cdf8f572c580625fd39 +size 87474 diff --git a/afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_model.json b/afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..c808d2bfb2068a3026aee5cea97bf99107f00e29 --- /dev/null +++ b/afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5641339625da579d6df610ee67821096412efe9e2c27095953ac8dede93f310a +size 115742 diff --git a/afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_origin.pdf b/afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..46ac6e2f1b2e6ca17b5e70d2d5fba91e68117700 --- /dev/null +++ b/afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7152206aed08effcf231b2d26c0e6c33a4dde90ef5860c4775209978661e15e7 +size 1477286 diff --git a/afinegrainedanalysisondistributionshift/full.md b/afinegrainedanalysisondistributionshift/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b8989b305ce5d6b36af3d5cf7f9251c1f6240a7c --- /dev/null +++ b/afinegrainedanalysisondistributionshift/full.md @@ -0,0 
+1,326 @@ +# A FINE-GRAINED ANALYSIS ON DISTRIBUTION SHIFT + +Olivia Wiles Sven Gowal Florian Stimberg Sylvestre-Alvise Rebuffi Ira Ktena Krishnamurthy (Dj) Dvijotham Taylan Cemgil + +DeepMind, London, UK + +{oawiles,sgowal,stimberg,sylvestre,iraktena,taylancemgil}@deepmind.com dvij@google.com + +# ABSTRACT + +Robustness to distribution shifts is critical for deploying machine learning models in the real world. Despite this necessity, there has been little work in defining the underlying mechanisms that cause these shifts and evaluating the robustness of algorithms across multiple, different distribution shifts. To this end, we introduce a framework that enables fine-grained analysis of various distribution shifts. We provide a holistic analysis of current state-of-the-art methods by evaluating 19 distinct methods grouped into five categories across both synthetic and real-world datasets. Overall, we train more than 85K models. Our experimental framework can be easily extended to include new methods, shifts, and datasets. We find, unlike previous work (Gulrajani & Lopez-Paz, 2021), that progress has been made over a standard ERM baseline; in particular, pretraining and augmentations (learned or heuristic) offer large gains in many cases. However, the best methods are not consistent over different datasets and shifts. Code is available at github.com/deepmind/distribution_shift_framework. + +# 1 INTRODUCTION + +If machine learning models are to be ubiquitous in critical applications such as driverless cars (Janai et al., 2020), medical imaging (Erickson et al., 2017), and science (Jumper et al., 2021), it is pivotal to build models that are robust to distribution shifts. Otherwise, models may fail surprisingly in ways that derail trust in the system. For example, Koh et al. (2020); Perone et al. (2019); AlBadawy et al. (2018); Heaven (2020); Castro et al.
(2020) find that a model trained on one set of hospitals may not generalise to the imaging conditions of another; Alcorn et al. (2019); Dai & Van Gool (2018) find that a model for driverless cars may not generalise to new lighting conditions or object poses; and Buolamwini & Gebru (2018) find that a model may perform worse on subsets of the distribution, such as different ethnicities, if the training set has an imbalanced distribution. Thus, it is important to understand when we expect a model to generalise and when we do not. This would allow a practitioner to have confidence in the system (e.g. if a model is demonstrated to be robust to the imaging conditions of different hospitals, then it can be deployed in new hospitals with confidence). + +While domain generalization is a well-studied area, Gulrajani & Lopez-Paz (2021); Schott et al. (2021) have cast doubt on the efficacy of existing methods, raising the question: has any progress been made in domain generalization over a standard empirical risk minimization (ERM) algorithm? Despite these discouraging results, there are many examples where machine learning models do generalise across datasets with different distributions. For example, CLIP (Radford et al., 2021), with well-engineered prompts, generalizes to many standard image datasets. Taori et al. (2020) found that models trained on one image dataset generalise to another, albeit with some drop in performance; in particular, higher performing models generalise better. However, there is little understanding and experimentation on when and why models generalise, especially in realistic settings inspired by real-world applications. This raises the following question: + +Can we define the important distribution shifts to be robust to and then systematically evaluate the robustness of different methods? + +To answer the above question, we present a grounded understanding of robustness to distribution shifts.
We draw inspiration from disentanglement literature (see section 6), which aims to separate images into an independent set of factors of variation (or attributes). In brief, we assume the data + +is composed of some (possibly extremely large) set of attributes. We expect models, having seen some distribution of values for an attribute, to be able to learn invariance to that attribute and so to generalise to unseen examples of the attribute and different distributions over that attribute. Using a simple example to clarify the setup, assume our data has two attributes (shape and color) among others. Given data with some distribution over the set of possible colors (e.g. red and blue) and the task of predicting shape (e.g. circle or square), we want our model to generalise to unseen colors (e.g. green) or a different distribution of colors (e.g. there are very few red circles in the training set, but the samples at evaluation are uniformly sampled from the set of possible colors and shapes). + +Using this framework, we evaluate models across three distribution shifts: spurious correlation, low-data drift, and unseen data shift (illustrated in figure 1) and two additional conditions (label noise and dataset size). We choose these settings as they arise in the real world and harm generalization performance. Moreover, in our framework, these distribution shifts are the fundamental building blocks of more complex distribution shifts. We additionally evaluate models when there are varying amounts of label noise (as inspired by noise arising from human raters) and when the total size of the train set varies (to understand how models perform as the number of training examples changes). The unique ability of our framework to evaluate fine-grained performance of models across different distribution shifts and under different conditions is of critical importance for analyzing methods under a variety of real-world settings.
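The shape/color example above can be made concrete with a short illustrative sketch (not part of the paper's experiments); the attribute names, correlation strength, and sample sizes below are arbitrary choices:

```python
import random

SHAPES = ["circle", "square"]
COLORS = ["red", "blue", "green"]

def sample_example(rng, correlation):
    # With probability `correlation`, color is spuriously tied to shape;
    # otherwise it is drawn independently and uniformly.
    shape = rng.choice(SHAPES)
    if rng.random() < correlation:
        color = {"circle": "blue", "square": "red"}[shape]  # spurious link
    else:
        color = rng.choice(COLORS)
    return shape, color

rng = random.Random(0)
# Training split: shape and color strongly correlated, so "green" is rare.
train = [sample_example(rng, correlation=0.95) for _ in range(10_000)]
# Test split: attributes sampled independently and uniformly.
test = [sample_example(rng, correlation=0.0) for _ in range(10_000)]

green_train = sum(c == "green" for _, c in train) / len(train)
green_test = sum(c == "green" for _, c in test) / len(test)
```

A shape classifier fit on `train` can exploit the spurious color cue, which fails on `test`, where green appears with the same frequency as every other color.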
This work makes the following contributions: + +- We propose a framework to define when and why we expect methods to generalise. We use this framework to define three real-world inspired distribution shifts. We then use this framework to create a systematic evaluation setup across real and synthetic datasets for different distribution shifts. Our evaluation framework is easily extendable to new distribution shifts, datasets, or methods to be evaluated. +- We evaluate and compare 19 different methods (training more than $85\mathrm{K}$ models) in these settings. These methods span the following 5 common approaches: architecture choice, data augmentation, domain generalization, adaptive algorithms, and representation learning. This allows for a direct comparison across different areas in machine learning. +- We find that simple techniques, such as data augmentation and pretraining, are often effective and that domain generalization algorithms do work for certain datasets and distribution shifts. However, there is no easy way to select the best approach a priori and results are inconsistent over different datasets and attributes, demonstrating there is still much work to be done to improve robustness in real-world settings. + +# 2 FRAMEWORK TO EVALUATE GENERALIZATION + +In this section we introduce our robustness framework for characterizing distribution shifts in a principled manner. We then define three common, real-world inspired distribution shifts. + +# 2.1 LATENT FACTORISATION + +We assume a joint distribution $p$ of inputs $\boldsymbol{x}$ and corresponding attributes $y^{1},y^{2},\ldots ,y^{K}$ (denoted as $y^{1:K}$ ) with $y^{k}\in \mathbb{A}^{k}$ where $\mathbb{A}^k$ is a finite set. One of these $K$ attributes is a label of interest, denoted as $y^{l}$ (in a mammogram, the label could be cancer/benign and a nuisance attribute $y^{i}$ with $i\neq l$ could be the identity of the hospital where the mammogram was taken).
Our aim is to build a classifier $f$ that minimizes the risk $R$. However, in real-world applications, we only have access to a finite set of inputs and attributes of size $n$. Hence, we minimize the empirical risk $\hat{R}$ instead: + +$$ +R(f) = \mathbb{E}_{(\boldsymbol {x},y^{l})\sim p}\left[\mathcal{L}(y^{l},f(\boldsymbol {x}))\right] \qquad \hat{R} (f;p) = \frac{1}{n}\sum_{\{(y_{i}^{l},\boldsymbol{x}_{i})\sim p\}_{i = 1}^{n}}\mathcal{L}(y_{i}^{l},f(\boldsymbol{x}_{i})). +$$ + +where $\mathcal{L}$ is a suitable loss function. Here, all nuisance attributes $y^{k}$ with $k\neq l$ are ignored and we work with samples obtained from the marginal $p(y^l,\boldsymbol {x})$. In practice, however, due to selection bias or other confounding factors in data collection, we are only able to train and test our models on data collected from two related but distinct distributions: $p_{\mathrm{train}},p_{\mathrm{test}}$. For example, $p_{\mathrm{train}}$ and $p_{\mathrm{test}}$ may be concentrated on different subsets of hospitals and this discrepancy may result in a distribution shift; for example, hospitals may use different equipment, leading to different staining on their cell + +![](images/2594b58170437add2a661ce31082fba6c561644da5bdd5d0455f852f32c961fa.jpg) +(a) $p_{\mathrm{train}}$: SC. + +![](images/f00e0c6c8e14e1483122af43803afac95a8493721948c0b9556f3995f7051b5b.jpg) +(b) $p_{\mathrm{train}}$: LDD. +Figure 1: Visualization of the joint distribution for the different shifts we consider on the DSPRITES example. The lighter the color, the more likely the given sample. figure 1a-1c visualise different shifts over $p_{\mathrm{train}}(y^l,y^a)$ discussed in section 2.2: spurious correlation (SC), low-data drift (LDD), and unseen data shift (UDS). figure 1d visualises the test set, where the attributes are uniformly distributed. + +![](images/1e5e178e4c29586319b12b478ca0d2ef0a342153d88fbaa28167ace0c4a9df78.jpg) +(c) $p_{\mathrm{train}}$: UDS.
+ +![](images/b40b57b8551ec915698a947f9e2982c0f87e5667d3dded6176a2cb9fecda55fa.jpg) +(d) $p_{\mathrm{test}}$: $y^l,y^a$ are IID. + +images. While we train $f$ on data from $p_{\mathrm{train}}$ by minimizing $\hat{R}(f; p_{\mathrm{train}})$, we aim to learn a model that generalises well to data from $p_{\mathrm{test}}$; that is, it should achieve a small $\hat{R}(f; p_{\mathrm{test}})$. + +While generalization in the above sense is desirable for machine learning models, it is not clear why a model $f$ trained on data from $p_{\mathrm{train}}$ should generalise to $p_{\mathrm{test}}$. It is worth noting that while $p_{\mathrm{train}}$ and $p_{\mathrm{test}}$ can be different, they are both related to the true distribution $p$. We take inspiration from disentanglement literature to express this relationship. In particular, we can view the data as being decomposed into an underlying set of factors of variation. We formalise various distribution shifts using a latent variable model for the true data generation process: + +$$ +z \sim p (z) \quad y ^ {i} \sim p \left(y ^ {i} \mid z\right) \quad i = 1 \dots K \quad \boldsymbol {x} \sim p (\boldsymbol {x} | z) \tag {1} +$$ + +where $z$ denotes latent factors. By a simple refactorization, we can write + +$$ +p (y ^ {1: K}, \boldsymbol {x}) = p (y ^ {1: K}) \int p (\boldsymbol {x} | z) p (z | y ^ {1: K}) d z = p (y ^ {1: K}) p (\boldsymbol {x} | y ^ {1: K}). +$$ + +Thus, the true distribution can be expressed as the product of the marginal distribution of the attributes with a conditional generative model.
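To make the factorization $p(y^{1:K}, \boldsymbol{x}) = p(y^{1:K})\,p(\boldsymbol{x}|y^{1:K})$ concrete, here is a minimal, hypothetical sketch in which train and test share the same conditional generator and differ only in the attribute marginal; the attribute values and the toy generator are invented for illustration:

```python
import random

def conditional_generator(rng, attrs):
    # Shared p(x | y^{1:K}): a toy scalar "image" summary derived from the
    # attributes plus a latent factor z. The same function serves train and test.
    shape_code = {"square": 0.0, "ellipse": 1.0}[attrs[0]]
    color_code = {"red": 0.0, "blue": 2.0}[attrs[1]]
    z = rng.gauss(0.0, 1.0)  # latent factor z ~ p(z)
    return (shape_code + color_code + 0.1 * z, z)

def sample_dataset(rng, attr_marginal, n):
    # Sample attributes from p(y^{1:K}), then x from the shared conditional.
    values, weights = zip(*attr_marginal.items())
    return [(a, conditional_generator(rng, a))
            for a in rng.choices(values, weights=weights, k=n)]

rng = random.Random(0)
p_train = {("square", "red"): 0.7, ("ellipse", "blue"): 0.3}   # biased marginal
p_test = {("square", "red"): 0.25, ("square", "blue"): 0.25,
          ("ellipse", "red"): 0.25, ("ellipse", "blue"): 0.25}  # uniform marginal
train = sample_dataset(rng, p_train, 1000)
test = sample_dataset(rng, p_test, 1000)
```

Only the marginal over attributes changes between the two splits; the mapping from attributes to inputs is fixed, which is exactly the assumption the next paragraph formalises.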
We assume that distribution shifts arise when a new marginal distribution for the attributes is chosen, such that $p(y^{1:K}) \neq p_{\mathrm{train}}(y^{1:K}) \neq p_{\mathrm{test}}(y^{1:K})$, but otherwise the conditional generative model is shared across all distributions, i.e., we have $p_{\mathrm{test}}(y^{1:K}, \boldsymbol{x}) = p_{\mathrm{test}}(y^{1:K}) \int p(\boldsymbol{x} | z) p(z | y^{1:K}) dz$, and similarly for $p_{\mathrm{train}}$. + +To provide more context, as a running example, we use the color DSPRITES dataset (Matthey et al., 2017), where in our notation $y^{1}$ defines the color with $\mathbb{A}^1 = \{\text{red, green, blue}\}$, and $y^{2}$ defines the shape with $\mathbb{A}^2 = \{\text{ellipse, heart, square}\}$. We can imagine that a data collector (intentionally or implicitly) selects some marginal distribution over attributes $p_{\text{train}}(y^{1:K})$ when training; for example, they select mostly blue ellipses and red hearts. This induces a new joint distribution over latent factors and attributes: $p_{\text{train}}(z, y^{1:K}) = p(z|y^{1:K})p_{\text{train}}(y^{1:K})$. Consequently, during training, we get images with a different joint distribution $p_{\text{train}}(\boldsymbol{x}, y^{1:K}) = \int p(\boldsymbol{x}|z)p_{\text{train}}(z, y^{1:K}) dz$. This similarly applies when collecting data for the test distribution. We focus on common cases of distribution shifts visualized in figure 1; we discuss these in more detail in section 2.2. + +The goal of enforcing robustness to distribution shifts is to maintain performance when the data generating distribution $p_{\mathrm{train}}$ changes. In other words, we would like to minimize risk on $p$, $p_{\mathrm{test}}$ given only access to $p_{\mathrm{train}}$. We can achieve robustness in the following ways: + +- Weighted resampling. We can resample the training set using importance weights $W(y^{1:K}) = p(y^{1:K}) / p_{\mathrm{train}}(y^{1:K})$.
This means that given the attributes, the $i$ -th data point $(y_i^{1:K}, x_i)$ in the training set is used with probability $W(y_i^{1:K}) / \sum_{i' = 1}^n W(y_{i'}^{1:K})$ rather than $1/n$ . We refer to this empirical distribution as $p_{\mathrm{reweight}}$ . This procedure requires access to the true distribution of attributes $p(y^{1:K})$ , so to avoid bias and improve fairness, it is often assumed that all combinations of attributes happen uniformly at random. +- Data Augmentation: Alternatively, we can learn a generative model $\hat{p}(\boldsymbol{x}|\boldsymbol{y}^{1:K})$ from the training data that aims to approximate $\int p(\boldsymbol{x}|\boldsymbol{z})p(\boldsymbol{z}|\boldsymbol{y}^{1:K})dz$ , as the true conditional + +![](images/597f2c8e67a970bcea0158425f67553693459570d4bc1d613b201420b189a661.jpg) +DSPRITES. + +![](images/2d78f90d88734bd5d6ae78296cc039b59770f380c70591b19132b438fdb2130b.jpg) +MPI3D. + +![](images/089007bb75ac264052508561309bca422eda7c1331d8f340343f8606db2cc0d4.jpg) +SHAPES3D. + +![](images/902d298ea70978f68b0d8f595ecd34d2390b6370792acc4e2a1c96293b97fff2.jpg) +SMALLNORB. + +![](images/cf7290052535de3e5425ab9c48e02a91fd25c79124793c9552aa730a7cb51617.jpg) +CAMELYON17. + +![](images/1e81cbee8a4cd8f93b2ba79002db8c71c93b7bc8258b2e86d3cbbf3026d5ea0c.jpg) +IWILDCAM. +Figure 2: Dataset samples. Each row fixes an attribute (e.g. color for DSPRITES, MPI3D, SHAPES3D; azimuth for SMALLNORB; hospital for CAMELYON17; and location for IWILDCAM). + +generator is by our assumption the same over all (e.g. train and test) distributions. If such a conditional generative model can be learned, we can sample new synthetic data at training time (e.g. according to the true distribution $p(y^{1:K})$ ) to correct for the distribution shift. 
More precisely, we can generate data from the augmented distribution $p_{\mathrm{aug}} = (1 - \alpha)p_{\mathrm{reweight}} + \alpha \hat{p}(\boldsymbol{x}|y^{1:K})p(y^{1:K})$ and train a supervised classifier on this augmented dataset. Here, $\alpha \in [0,1]$ is the fraction of synthetic data used for training. + +- Representation Learning: An alternative factorization of a data generating distribution (e.g. train) is $p_{\mathrm{train}}(y^{1:K}, \boldsymbol{x}) = \int p(z|\boldsymbol{x}) p_{\mathrm{train}}(y^{1:K}|z) dz$. We can learn an unsupervised representation that approximates $p(z|\boldsymbol{x})$ using the training data only, and attach a classifier to learn a task-specific head that approximates $p_{\mathrm{train}}(y^l | z)$. Again, by our assumption $p(z|\boldsymbol{x}) \propto p(\boldsymbol{x}|z)p(z)$. Given a good guess of the true prior, the learned representation would not be impacted by the specific attribute distribution and so would generalise to $p_{\mathrm{test}}, p$. + +# 2.2 DISTRIBUTION SHIFTS + +While distribution shifts can happen in a continuum, we consider three types of shifts, inspired by real-world challenges. We discuss these shifts and two additional, real-world inspired conditions. + +Test distribution $p_{\mathrm{test}}$. We assume that the attributes are distributed uniformly: $p_{\mathrm{test}}(y^{1:K}) = 1 / \prod_i |\mathbb{A}^i|$. This is desirable, as all attributes are represented and a priori independent. + +Shift 1: Spurious correlation - Attributes are correlated under $p_{\mathrm{train}}$ but not $p_{\mathrm{test}}$. Spurious correlation arises in the wild for a number of reasons including capture bias, environmental factors, and geographical bias (Beery et al., 2018; Torralba & Efros, 2011). These spurious correlations lead to surprising results and poor generalization. Therefore, it is important to be able to build models that are robust to such challenges.
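The weighted-resampling recipe above admits a short sketch; estimating $p_{\mathrm{train}}$ empirically from attribute counts is an illustrative simplification, not the paper's implementation:

```python
from collections import Counter

def reweight_probs(train_attrs, p_target):
    # Per-example sampling probabilities W(y) / sum_i W(y_i), with importance
    # weights W(y) = p_target(y) / p_train(y) and p_train estimated empirically.
    n = len(train_attrs)
    counts = Counter(train_attrs)
    p_train = {y: c / n for y, c in counts.items()}
    w = [p_target[y] / p_train[y] for y in train_attrs]
    total = sum(w)
    return [wi / total for wi in w]

# 90 "blue" examples vs 10 "red": resample toward a uniform target marginal.
train_attrs = ["blue"] * 90 + ["red"] * 10
probs = reweight_probs(train_attrs, p_target={"blue": 0.5, "red": 0.5})
```

Sampling training examples with these probabilities (rather than $1/n$) places half of the expected sampling mass on the rare "red" group, realising $p_{\mathrm{reweight}}$ for this toy marginal.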
In our framework, spurious correlation arises when two attributes $y^{a}$, $y^{b}$ are correlated at training time, but this is not true of $p_{\mathrm{test}}$, for which attributes are independent: $p_{\mathrm{train}}(y^{a}|y^{1}\dots y^{b}\dots y^{K}) \neq p_{\mathrm{train}}(y^{a}|y^{1}\dots y^{b - 1},y^{b + 1}\dots y^{K})$. This is especially problematic when one attribute $y^{b}$ is $y^{l}$, the label. Using the running DSPRITES example, shape and color may be correlated and the model may find it easier to predict color. If color is the label, the model will generalise well. However, if the aim is to predict shape, the model's reliance on color will lead to poor generalization. + +Shift 2: Low-data drift - Attribute values are unevenly distributed under $p_{\text{train}}$ but not under $p_{\text{test}}$. Low-data drift arises in the wild (e.g. in (Buolamwini & Gebru, 2018) for different ethnicities) when data has not been collected uniformly across different attributes. When deploying models in the wild, it is important to be able to reason and have confidence that the final predictions will + +![](images/cb4d0e03647336ee9a80220effc247ee1ea682dd5cc1dbc84d05051274b6fa23.jpg) +Figure 3: Spurious Correlation. We use all correlated samples and vary the number of samples $N$ from the true, uncorrelated distribution. We plot the percentage change over the baseline ResNet, averaged over all seeds and datasets. Blue is better, red worse. CYCLEGAN performs consistently best while ImageNet augmentation and pretraining on ImageNet also consistently boost performance. + +![](images/f9a7c82df5bdf15d789e2c97bb65a208b8616ed47ebcface9a067df052dbaef5.jpg) +Figure 4: Low-data drift. We use all samples from the high data regions and vary the number of samples $N$ from the low-data region. We plot the percentage change over the baseline ResNet, averaged over all seeds and datasets. Blue is better, red worse.
Pretraining on ImageNet performs consistently best, while CYCLEGAN, most domain generalization methods and ImageNet augmentation also provide some boost in performance. + +be consistent and fair across different attributes. In the framework above, low-data shifts arise when certain values in the set $\mathbb{A}^a$ of an attribute $y^{a}$ are sampled with a much smaller probability than in $p_{\mathrm{test}}$ : $p_{\mathrm{train}}(y^a = v)\ll p_{\mathrm{test}}(y^a = v)$. Using the DSPRITES example, only a handful of red shapes may be seen at training time, yet in $p_{\mathrm{test}}$ all colors are sampled with equal probability. + +Shift 3: Unseen data shift - Some attribute values are unseen under $p_{\text{train}}$ but present under $p_{\text{test}}$. This is a special case of shift 2: low-data drift, which we make explicit due to its important real-world applications. Unseen data shift arises when a model trained in one setting is expected to work in another, disjoint setting. For example: a model trained to classify animals on images at certain times of day should generalise to other times of day. In our framework, unseen data shift arises when some values in the set $\mathbb{A}^a$ of an attribute $y^a$ are unseen in $p_{\text{train}}$ but present in $p_{\text{test}}$ : + +$$ +p_{\text{train}}(y^{a} = v) = 0 \quad p_{\text{test}}(y^{a} = v) > 0 \quad |\{v \mid p_{\text{train}}(y^{a} = v) > 0\}| > 1 \tag {2} +$$ + +This is a stronger constraint than in standard out-of-distribution generalization (see section 6), as multiple values for $\mathbb{A}^a$ must be seen under $p_{\mathrm{train}}$, which allows the model to learn invariance to $y^{a}$. In the DSPRITES example, the color red may be unseen at train time but all colors are in $p_{\mathrm{test}}$. + +Discussion. We choose these sets of shifts as they are the building blocks of more complex distribution shifts.
Consider the simplest case of two attributes: the label and a nuisance attribute. If we consider the marginal distribution of the label, it decomposes into two terms: the conditional probability and the probability of a given attribute value: $p(y^l) = \sum_{y^a} p(y^l | y^a) p(y^a)$. The three shifts we consider control these terms independently: unseen data shift and low-data drift control $p(y^a)$ whereas spurious correlation controls $p(y^l | y^a)$. The composition of these terms describes any distribution shift for these two variables. + +# 2.3 CONDITIONS + +Label noise. We investigate the change in performance due to noisy information. This can arise when there are disagreements and errors among the labellers (e.g. in medical imaging (Castro et al., 2020)). We model this as an observed attribute (e.g. the label) being corrupted by noise: $\hat{y}^i\sim c(y^i)$, where $y^{i}\in \mathbb{A}^{i}$ is the true label, $\hat{y}^i\in \mathbb{A}^i$ the corrupted, observed one, and $c$ the corrupting function. + +Dataset size. We investigate how performance changes with the size of the training dataset. This setting arises when it is unrealistic or expensive to collect additional data (e.g. in medical imaging or in camera trap imagery). Therefore, it is important to understand how performance degrades given fewer total samples. We do this by limiting the total number of samples from $p_{\text{train}}$. + +# 3 MODELS EVALUATED + +We evaluate 19 algorithms to cover a broad range of approaches that can be used to improve model robustness to distribution shifts and demonstrate how they relate to the three ways to achieve robustness outlined in section 2. We believe this is the first paper to comprehensively evaluate a large set of different approaches in a variety of settings. These algorithms cover the following areas: architecture choice, data augmentation, domain generalization, adaptive approaches and representation learning.
Further discussion on how these models relate to our robustness framework is in appendix E. + +Architecture choice. We evaluate the following standard vision models: ResNet18, ResNet50, ResNet101 (He et al., 2016), ViT (Dosovitskiy et al., 2021), and an MLP (Vapnik, 1992). We use weighted resampling $p_{\mathrm{reweight}}$ to oversample the parts of the distribution that have low probability under $p_{\mathrm{train}}$. Performance depends on how robust the learned representation is to distribution shift. + +Heuristic data augmentation. These approaches attempt to approximate the true underlying generative model $p(\boldsymbol{x}|y^{1:K})$ in order to improve robustness. We analyze the following augmentation methods: standard ImageNet augmentation (He et al., 2016), AugMix without JSD (Hendrycks et al., 2020), RandAugment (Cubuk et al., 2020), and AutoAugment (Cubuk et al., 2019). Performance depends on how well the heuristic augmentations approximate the true generative model. + +Learned data augmentation. These approaches approximate the true underlying generative model $p(\boldsymbol{x}|y^{1:K})$ by learning augmentations conditioned on the nuisance attribute. The learned augmentations can be used to transform any image $\boldsymbol{x}$ to have a new attribute, while keeping the other attributes fixed. We follow Goel et al. (2020), who use CYCLEGAN (Zhu et al., 2017), but we do not use their SGDRO objective in order to evaluate the performance of learned data augmentation alone. Performance depends on how well the learned augmentations approximate the true generative model. + +Domain generalization. These approaches aim to recover a representation $z$ that is independent of the attribute: $p(y^a, z) = p(y^a)p(z)$ to allow generalization over that attribute. We evaluate IRM (Arjovsky et al., 2019), DeepCORAL (Sun & Saenko, 2016), domain MixUp (Gulrajani & Lopez-Paz, 2021), DANN (Ganin et al., 2016), and SagNet (Nam et al., 2021).
Performance depends on the invariance of the learned representation $z$. + +Adaptive approaches. These works modify $p_{\mathrm{reweight}}$ dynamically. We evaluate JTT (Liu et al., 2021) and BN-Adapt (Schneider et al., 2020). These methods do not give performance guarantees. + +Representation learning. These works aim to learn a robust representation $z$ that describes the true prior. We evaluate using a $\beta$ -VAE (Higgins et al., 2017a) and pretraining on ImageNet (Deng et al., 2009). Performance depends on the quality of the learned representation for the specific task. + +# 4 EXPERIMENTS + +We first introduce the datasets and experimental setup. We evaluate the 19 different methods across these six datasets, three distribution shifts, varying label noise, and dataset size. We plot aggregate results in figures 3-7 and complete results in the appendix in figures 10-12. We discuss the results by distilling them into seven concrete takeaways in section 4.1 and four practical tips in section 4.2. + +Datasets. We evaluate these approaches on six vision classification datasets - DSPRITES (Matthey et al., 2017), MPI3D (Gondal et al., 2019), SMALLNORB (LeCun et al., 2004), SHAPES3D (Burgess & Kim, 2018), CAMELYON17 (Koh et al., 2020; Bandi et al., 2018), and IWILDCAM (Koh et al., 2020; Beery et al., 2018). These datasets consist of multiple (potentially arbitrarily many) attributes. We select two attributes $y^{l}$, $y^{a}$ for each dataset and make one of them, $y^{l}$, the label. We then use these two attributes to build the three shifts. Visualizations of samples from the datasets are given in figure 2 and further description in appendix D.1. We discuss precisely how we set up the shifts, choose the attributes, and additional conditions for these datasets in appendix D.2. + +Model selection.
When investigating heuristic data augmentation, domain generalization, learned augmentation, adaptive approaches, and representation learning, we use a ResNet18 for the simpler, synthetic datasets (DSPRITES, MPI3D, SHAPES3D, and SMALLNORB) but a ResNet50 for the + +![](images/4064755222727fbccc08c91392f844ad1c20565269b5596c8251c75d06bdc6ae.jpg) +Figure 5: Unseen data shift. We rank the methods (where best is 1, worst 19) for each dataset and seed and plot the rankings, with the overall median rank as the black bar. Pretraining on ImageNet and ImageNet augmentation perform consistently best. DANN, CycleGAN and other heuristic augmentations perform consistently well. + +more complex, real world ones (CAMELYON17 and IWILDCAM). To perform model selection, we choose the best model according to the validation set, which matches the distribution of the test set. In the unseen data shift setting for CAMELYON17 and IWILDCAM, we use the given out-of-distribution validation set, which is a distinct set in $\mathcal{D}$ that is independent of $\mathcal{D}_{\mathrm{train}}, \mathcal{D}_{\mathrm{test}}$. (We consider using the in-distribution validation set in appendix B.4.) + +Hyperparameter choices. We perform a sweep over the hyperparameters (the precise sweeps are given in appendix F.8). We run each set of hyperparameters for five seeds for each setting. To choose the best model for each seed, we perform model selection over all hyperparameters using the top-1 accuracy on the validation set. In the low-data and spurious correlation settings, we choose a different set of samples from the low-data region with each seed. We report the mean and standard deviation over the five seeds. + +# 4.1 TAKEAWAYS + +Takeaway 1: While we can improve over ERM, no one method always performs best. The relative performance between methods varies across datasets and shifts.
Under spurious correlation (figure 3), CYCLEGAN consistently performs best but in figure 4, under low-data drift, pretraining consistently performs best. Under unseen data shift (figure 5), pretraining is again one of the best performing methods. However, if we drill down into the results in figure 10 (appendix B.1), we can see pretraining performs best on the synthetic datasets, but not on CAMELYON17 (where using augmentation or DANN is best) or IWILDCAM (where using ViT or an MLP is best). + +Takeaway 2: Pretraining is a powerful tool across different shifts and datasets. While pretraining is not always helpful (e.g. in appendix B.1 on CAMELYON17 in figures 10-11, IWILDCAM in figures 10-11), it often provides a strong boost in performance. This is presumably because the representation $z$ learned during pretraining is helpful for the downstream task. For example, the representation may have been trained to be invariant to certain useful properties (e.g. scale, shift, and color). If these properties are useful on the downstream tasks, then the learned representation should improve generalization. + +Takeaway 3: Heuristic augmentation improves generalization if the augmentation describes an attribute. In all settings (figures 3-5), ImageNet augmentation generally improves performance. However, RandAugment, AugMix, and AutoAugment have more variable performance (as further shown in figures 10-12). These methods are compositions of different augmentations. We investigate the impact of each augmentation in RandAugment in appendix B.2 and find variable performance. Augmentations that approximate the true underlying generative model $p(\boldsymbol{x}|y^{1:K})$ lead to the best results; otherwise, the model may waste capacity. For example, on CAMELYON17 (which consists of cell images), color jitter harms performance but on SHAPES3D and MPI3D it is essential. + +Takeaway 4: Learned data augmentation is effective across different conditions and distribution shifts.
This approach is highly effective in the spurious correlation setting (figure 3). It can also help in the low-data and unseen data shift settings (figures 4 and 5), though the gains for these two shifts are not as large as for pretraining. The effectiveness of this approach can be explained by the fact that if the augmentations are learned perfectly, then the augmented samples are, by design, drawn from the true underlying generative model and can cover missing parts of the distribution.
+
+Takeaway 5: Domain generalization algorithms offer limited performance improvement. In some cases these methods (in particular DANN) do improve performance, most notably in the low-data drift and unseen data shift settings (figures 4-5). However, this depends on the dataset (see figures 10-12), and performance is rarely much better than using heuristic augmentation.
+
+![](images/1b7fef7ee006a2361949a9cc0a144f73335cf75b1cedebeee6e86f5bf963ee45.jpg)
+Figure 6: Condition 1: Noisy labels. We vary the amount of noise $p$ in the labels. We plot the percentage change over the baseline ResNet, averaged over all seeds and datasets.
+
+![](images/f666520c87abb3bce27d5d56d2570a43e2d5846682f74e19e0d9260e913540a1.jpg)
+Figure 7: Condition 2: Fixed data. We vary the total size of the dataset $T$. We plot the percentage change over the baseline ResNet, averaged over all seeds and datasets.
+
+Takeaway 6: The best algorithm may depend on the precise experimental conditions. When labels have varying amounts of noise (figure 6), relative performance is reasonably consistent. When the dataset size decreases (figure 7), heuristic augmentation methods perform poorly. However, pretraining and learned augmentation are consistently robust.
+
+Takeaway 7: The precise attributes we consider directly impact the results.
For example, on DSPRITES, if we make color $y^{l}$ and shape $y^{a}$, we find that all methods generalise perfectly in the unseen data shift setting (as demonstrated in appendix B.3), unlike when shape is $y^{l}$ (figure 10).
+
+# 4.2 PRACTICAL TIPS
+
+While there is no free lunch in terms of the method to choose, we recommend the following tips.
+
+Tip 1: If heuristic augmentations approximate part of the true underlying generative model, use them. Under this constraint, heuristic augmentations can significantly improve performance; they should be a first port of call. How to choose these augmentations heuristically, without exhaustively trying all possible combinations, is an open research question.
+
+Tip 2: If heuristic augmentations do not help, learn the augmentation. If the true underlying generative model cannot be readily approximated with heuristic techniques, but some subset of the generative model can be learned by conditioning on known attributes, this is a promising way to further improve performance. How to learn the underlying generative model directly from data and use it for augmentation is a promising area to explore more thoroughly.
+
+Tip 3: Use pretraining. In general, we found pretraining to be a useful way to learn a robust representation. While this was not true for all datasets (e.g. CAMELYON17, IWILDCAM), performance could be dramatically improved by pretraining on others (DSPRITES, MPI3D, SMALLNORB, SHAPES3D). An area to be investigated is the utility of self-supervised pretraining.
+
+Tip 4: More complex approaches lead to limited improvements. Domain generalization, adaptive approaches, and disentangling lead to limited improvements, if any, across the different datasets and shifts. Of these approaches, DANN generally performs best. How to make these approaches generically useful for robustness is still an open research question.
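To make Tip 1 concrete, here is a minimal sketch of composing a small set of hand-chosen augmentations in pure NumPy. The particular transforms (`random_flip`, `color_jitter`) and their parameters are illustrative assumptions, not the exact pipeline used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(img):
    # Horizontal flip with probability 0.5.
    return img[:, ::-1, :] if rng.random() < 0.5 else img

def color_jitter(img, strength=0.2):
    # Rescale each channel by a random factor, then clip to [0, 1].
    # Per Takeaway 3, this kind of jitter helps on SHAPES3D/MPI3D
    # but harms performance on CAMELYON17 cell images.
    factors = 1.0 + rng.uniform(-strength, strength, size=(1, 1, img.shape[2]))
    return np.clip(img * factors, 0.0, 1.0)

def augment(img, ops):
    # Apply a hand-chosen composition of transforms to one image.
    for op in ops:
        img = op(img)
    return img

img = rng.random((32, 32, 3))  # toy HWC image in [0, 1)
out = augment(img, [random_flip, color_jitter])
```

The composition is the key design point: per Tip 1, each transform should approximate a factor of the true underlying generative model for the dataset at hand.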
+
+# 5 DISCUSSION
+
+Our experiments demonstrate that no one method performs best over all shifts and that performance is dependent on the precise attribute being considered. This leads to the following considerations.
+
+There is no way to decide a priori on the best method given only the dataset. It would be helpful for practitioners to be able to select the best approaches without requiring comprehensive evaluations and comparisons. Moreover, it is unclear how to pinpoint the precise distribution shift (and thereby the methods to explore) in a given application. This should be an important future area of investigation.
+
+We should focus on the cases where we have knowledge about the distribution shift. We found that the ability of a given algorithm to generalize depends heavily on the attribute and dataset being considered. Instead of trying to make one algorithm for any possible shift, it makes sense to have adaptable algorithms which can use auxiliary information if given. Moreover, algorithms should be evaluated in the context for which we will use them.
+
+It is pivotal to evaluate methods in a variety of conditions. Performance varies due to the number of examples, amount of noise, and size of the dataset. Thus it is important to perform comprehensive evaluations when comparing different methods, as in our framework. This gives others a more realistic view of different models' relative performance in practice.
+
+# 6 RELATED WORK
+
+We briefly summarize benchmarks on distribution shift, leaving a complete review to appendix C.
+
+Benchmarking robustness to out-of-distribution (OOD) generalization. While a multitude of methods exist that report improved OOD generalization, Gulrajani & Lopez-Paz (2021) found that in actuality no evaluated method performed significantly better than a strong ERM baseline on a variety of datasets. However, Hendrycks et al.
(2021) found that, when we focus on better augmentation, larger models, and pretraining, we can get a sizeable boost in performance. This can be seen on the Koh et al. (2020) benchmark (the largest boosts come from larger models and better augmentation). Our work is complementary to these methods, as we look at a range of approaches (pretraining, heuristic augmentation, learned augmentation, domain generalisation, adaptive approaches, disentangled representations) on a range of both synthetic and real-world datasets. Moreover, we allow for a fine-grained analysis of methods over different distribution shifts.
+
+Benchmarking spurious correlation and low-data drift. Studies on fairness and bias (surveyed by Mehrabi et al. (2021)) have demonstrated the pernicious impact of low data in face recognition (Buolamwini & Gebru, 2018), medical imaging (Castro et al., 2020), and conservation (Beery et al., 2018), and of spurious correlation in classification (Geirhos et al., 2019) and conservation (Beery et al., 2020). Arjovsky et al. (2019) hypothesized that spurious correlation may be the underlying reason for poor generalization of models to unseen data. To our knowledge, there has been no large-scale work focused on systematically understanding the benefits of different methods across these distribution shifts over multiple datasets and with fine-grained control of the amount of shift. Here we introduce a framework for creating these shifts in a controllable way, allowing such challenges to be investigated robustly.
+
+Benchmarking disentangled representations. A related area, disentangled representation learning, aims to learn a representation where the factors of variation in the data are separated. If this could be achieved, then models should be able to generalise effortlessly to unseen data, as investigated in multiple settings such as reinforcement learning (Higgins et al., 2017b).
Despite many years of work on disentangled representations (Higgins et al., 2017a; Burgess et al., 2017; Kim & Mnih, 2018; Chen et al., 2018), a benchmark study by Locatello et al. (2019) found that, without supervision or implicit model or data assumptions, one cannot reliably perform disentanglement; however, weak supervision appears sufficient to do so (Locatello et al., 2020). Dittadi et al. (2021); Schott et al. (2021); Montero et al. (2020) further investigated whether representations (disentangled or not) can interpolate, extrapolate, or compose properties; they found that, when considering complex combinations of properties and multiple datasets, representations do not do so reliably.
+
+# 7 CONCLUSIONS
+
+This work has put forward a general, comprehensive framework to reason about distribution shifts. We analyzed 19 different methods, spanning a range of techniques, over three distribution shifts – spurious correlation, low-data drift, and unseen data shift, and two additional conditions – label noise and dataset size. We found that while results are not consistent across datasets and methods, a number of methods do better than an ERM baseline in some settings. We then put forward a number of practical tips, promising directions, and open research questions. We hope that our framework and comprehensive benchmark spurs research in this area and provides a useful tool for practitioners to evaluate which methods work best under which conditions and shifts.
+
+# ACKNOWLEDGMENTS
+
+The authors thank Irina Higgins and Timothy Mann for feedback and discussions while developing this work. They also thank Irina, Rosemary Ke, and Dilan Gorur for reviewing earlier drafts.
+
+# REFERENCES
+
+Ehab A AlBadawy, Ashirbani Saha, and Maciej A Mazurowski. Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing. Medical physics, 2018.
+Michael A Alcorn, Qi Li, Zhitao Gong, Chengfei Wang, Long Mai, Wei-Shinn Ku, and Anh Nguyen.
Strike (with) a pose: Neural networks are easily fooled by strange poses of familiar objects. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019. +Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019. +Peter Bandi, Oscar Geessink, Quirine Manson, Marcory Van Dijk, Maschenka Balkenhol, Meyke Hermsen, Babak Ehteshami Bejnordi, Byungjae Lee, Kyunghyun Paeng, Aoxiao Zhong, et al. From detection of individual metastases to classification of lymph node status at the patient level: the CAMELYON17 challenge. IEEE Transactions on Medical Imaging, 2018. +Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In Proceedings of the European Conference on Computer Vision, 2018. +Sara Beery, Yang Liu, Dan Morris, Jim Piavis, Ashish Kapoor, Neel Joshi, Markus Meister, and Pietro Perona. Synthetic examples improve generalization for rare classes. In Proceedings of the IEEE Workshop on Applications of Computer Vision, 2020. +Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency, 2018. +Chris Burgess and Hyunjik Kim. 3D shapes dataset. https://github.com/deepmind/3dshapes-dataset/, 2018. +Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in $\beta$ -VAE. In Workshop on Learning Disentangled Representations at the 31st Conference on Neural Information Processing Systems, 2017. +Fabio M Carlucci, Paolo Russo, Tatiana Tommasi, and Barbara Caputo. Hallucinating agnostic images to generalize across domains. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019. +Daniel C Castro, Ian Walker, and Ben Glocker. Causality matters in medical imaging. Nature Communications, 2020. 
+
+Ricky TQ Chen, Xuechen Li, Roger Grosse, and David Duvenaud. Isolating sources of disentanglement in variational autoencoders. In Advances in Neural Information Processing Systems, 2018.
+Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, 2016.
+Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
+Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. AutoAugment: Learning augmentation strategies from data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
+Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. RandAugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2020.
+Dengxin Dai and Luc Van Gool. Dark model adaptation: Semantic image segmentation from daytime to nighttime. In International Conference on Intelligent Transportation Systems, 2018.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009.
+Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wüthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, and Bernhard Schölkopf. On the transfer of disentangled representations in realistic settings. In Proceedings of the International Conference on Learning Representations, 2021.
+
+Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In Proceedings of the International Conference on Learning Representations, 2021.
+Bradley J Erickson, Panagiotis Korfiatis, Zeynettin Akkus, and Timothy L Kline. Machine learning for medical imaging. Radiographics, 2017.
+Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the International Conference on Machine Learning, 2017.
+Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 2016.
+Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In Proceedings of the International Conference on Learning Representations, 2019.
+Karan Goel, Albert Gu, Yixuan Li, and Christopher Ré. Model patching: Closing the subgroup performance gap with data augmentation. arXiv preprint arXiv:2008.06775, 2020.
+Muhammad Waleed Gondal, Manuel Wüthrich, Dorde Miladinović, Francesco Locatello, Martin Breidt, Valentin Volchkov, Joel Akpo, Olivier Bachem, Bernhard Schölkopf, and Stefan Bauer. On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset. arXiv preprint arXiv:1906.03292, 2019.
+Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2014.
+
+Sven Gowal, Chongli Qin, Po-Sen Huang, Taylan Cemgil, Krishnamurthy Dvijotham, Timothy Mann, and Pushmeet Kohli. Achieving robustness in the wild via adversarial mixing with disentangled representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020.
+Keren Gu, Xander Masotto, Vandana Bachani, Balaji Lakshminarayanan, Jack Nikodem, and Dong Yin. An instance-dependent simulation framework for learning with label noise. arXiv preprint arXiv:2107.11413, 2021.
+Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In Proceedings of the International Conference on Learning Representations, 2021.
+Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In Advances in Neural Information Processing Systems, 2018.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
+Will Douglas Heaven. Google's medical AI was super accurate in a lab. Real life was a different story. MIT Technology Review, 2020.
+Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In Proceedings of the International Conference on Learning Representations, 2019.
+Dan Hendrycks, Mantas Mazeika, Duncan Wilson, and Kevin Gimpel. Using trusted data to train deep networks on labels corrupted by severe noise. In Advances in Neural Information Processing Systems, 2018.
+Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. In Proceedings of the International Conference on Machine Learning, 2019.
+Dan Hendrycks, Norman Mu, Ekin D Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. AugMix: A simple data processing method to improve robustness and uncertainty.
In Advances in Neural Information Processing Systems, 2020.
+Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the International Conference on Computer Vision, 2021.
+Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. $\beta$ -VAE: Learning basic visual concepts with a constrained variational framework. In Proceedings of the International Conference on Learning Representations, 2017a.
+Irina Higgins, Arka Pal, Andrei Rusu, Loic Matthey, Christopher Burgess, Alexander Pritzel, Matthew Botvinick, Charles Blundell, and Alexander Lerchner. DARLA: Improving zero-shot transfer in reinforcement learning. In Proceedings of the International Conference on Machine Learning, 2017b.
+Joel Janai, Fatma Güney, Aseem Behl, Andreas Geiger, et al. Computer vision for autonomous vehicles: Problems, datasets and state of the art. Foundations and Trends® in Computer Graphics and Vision, 2020.
+Fredrik D Johansson, David Sontag, and Rajesh Ranganath. Support and invertibility in domain-invariant representations. In The International Conference on Artificial Intelligence and Statistics. PMLR, 2019.
+John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis.
Highly accurate protein structure prediction with AlphaFold. Nature, 2021.
+Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
+Ashish Khetan, Zachary C Lipton, and Anima Anandkumar. Learning from noisy singly-labeled data. In Proceedings of the International Conference on Learning Representations, 2018.
+Hyunjik Kim and Andriy Mnih. Disentangling by factorising. In Proceedings of the International Conference on Machine Learning, 2018.
+Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
+Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. WILDS: A benchmark of in-the-wild distribution shifts. arXiv preprint arXiv:2012.07421, 2020.
+Yann LeCun, Fu Jie Huang, and Léon Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2004.
+Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In Proceedings of the International Conference on Computer Vision, 2017.
+Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, and Dacheng Tao. Deep domain generalization via conditional invariant adversarial networks. In Proceedings of the European Conference on Computer Vision, 2018.
+Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just Train Twice: Improving group robustness without training group information. In Proceedings of the International Conference on Machine Learning, 2021.
+
+Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In Proceedings of the International Conference on Machine Learning, 2019.
+Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, and Michael Tschannen. Weakly-supervised disentanglement without compromises. In Proceedings of the International Conference on Machine Learning, 2020.
+Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In Proceedings of the International Conference on Machine Learning, 2015.
+Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. In Proceedings of the International Conference on Machine Learning, 2017.
+Loic Matthey, Irina Higgins, Demis Hassabis, and Alexander Lerchner. dsprites: Disentanglement testing sprites dataset. https://github.com/deepmind/dsprites-dataset/, 2017.
+Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 2021.
+Milton Llera Montero, Casimir JH Ludwig, Rui Ponte Costa, Gaurav Malhotra, and Jeffrey Bowers. The role of disentanglement in generalisation. In Proceedings of the International Conference on Learning Representations, 2020.
+Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the International Conference on Machine Learning, 2010.
+Hyeonseob Nam, HyunJae Lee, Jongchan Park, Wonjun Yoon, and Donggeun Yoo. Reducing domain gap by reducing style bias. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021.
+Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu.
Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
+Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the International Conference on Computer Vision, 2019.
+Christian S Perone, Pedro Ballester, Rodrigo C Barros, and Julien Cohen-Adad. Unsupervised domain adaptation for medical imaging segmentation with self-ensembling. NeuroImage, 2019.
+Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020, 2021.
+Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do ImageNet classifiers generalize to ImageNet? In Proceedings of the International Conference on Machine Learning, 2019.
+Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the International Conference on Machine Learning, 2014.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 2015.
+Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. In Proceedings of the International Conference on Learning Representations, 2020.
+Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, and Matthias Bethge. Improving robustness against common corruptions by covariate shift adaptation.
In Proceedings of the International Conference on Learning Representations, 2020.
+Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, and Wieland Brendel. Visual representation learning does not generalize strongly within the same domain. In Proceedings of the International Conference on Learning Representations, 2021.
+Vaishaal Shankar, Achal Dave, Rebecca Roelofs, Deva Ramanan, Benjamin Recht, and Ludwig Schmidt. Do image classifiers generalize across time? arXiv preprint arXiv:1906.02168, 2019.
+Baochen Sun and Kate Saenko. Deep CORAL: Correlation alignment for deep domain adaptation. In Proceedings of the European Conference on Computer Vision, 2016.
+Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig Schmidt. Measuring robustness to natural distribution shifts in image classification. arXiv preprint arXiv:2007.00644, 2020.
+Antonio Torralba and Alexei A Efros. Unbiased look at dataset bias. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2011.
+Vladimir Vapnik. Principles of risk minimization for learning theory. In Advances in Neural Information Processing Systems, 1992.
+Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
+Yufei Wang, Haoliang Li, and Alex C Kot. Heterogeneous domain generalization via domain mixup. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2020.
+Kai Xiao, Logan Engstrom, Andrew Ilyas, and Aleksander Madry. Noise or Signal: The role of image backgrounds in object recognition. arXiv preprint arXiv:2006.09994, 2020.
+Cihang Xie and Alan Yuille. Intriguing properties of adversarial training at scale.
In Proceedings of the International Conference on Learning Representations, 2020.
+Minghao Xu, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, and Wenjun Zhang. Adversarial domain adaptation with domain mixup. In AAAI Conference on Artificial Intelligence, 2020.
+Shen Yan, Huan Song, Nanxiang Li, Lincan Zou, and Liu Ren. Improve unsupervised domain adaptation with mixup training. arXiv preprint arXiv:2001.00677, 2020.
+Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. MixUp: Beyond empirical risk minimization. In Proceedings of the International Conference on Learning Representations, 2018.
+Ling Zhang, Xiaosong Wang, Dong Yang, Thomas Sanford, Stephanie Harmon, Baris Turkbey, Holger Roth, Andriy Myronenko, Daguang Xu, and Ziyue Xu. When unseen domain generalization is unnecessary? Rethinking data augmentation. arXiv preprint arXiv:1906.03347, 2019.
+Han Zhao, Remi Tachet Des Combes, Kun Zhang, and Geoffrey Gordon. On learning invariant representations for domain adaptation. In Proceedings of the International Conference on Machine Learning, 2019.
+Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, and Tao Xiang. Deep domain-adversarial image generation for domain generalisation. In AAAI Conference on Artificial Intelligence, 2020.
+Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the International Conference on Computer Vision, 2017.
+# ANALYTIC-DPM: AN ANALYTIC ESTIMATE OF THE OPTIMAL REVERSE VARIANCE IN DIFFUSION PROBABILISTIC MODELS
+
+Fan Bao $^{1*}$ , Chongxuan Li $^{23\dagger}$ , Jun Zhu $^{1\dagger}$ , Bo Zhang $^{1}$
+
+$^{1}$ Dept. of Comp. Sci. & Tech., Institute for AI, Tsinghua-Huawei Joint Center for AI BNRist Center, State Key Lab for Intell. Tech.
& Sys., Tsinghua University, Beijing, China $^{2}$ Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China $^{3}$ Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China bf19@mails.tsinghua.edu.cn, chongxuanli1991@gmail.com, {dcszj, dcszb}@tsinghua.edu.cn + +# ABSTRACT + +Diffusion probabilistic models (DPMs) represent a class of powerful generative models. Despite their success, the inference of DPMs is expensive since it generally needs to iterate over thousands of timesteps. A key problem in the inference is to estimate the variance in each timestep of the reverse process. In this work, we present a surprising result that both the optimal reverse variance and the corresponding optimal KL divergence of a DPM have analytic forms w.r.t. its score function. Building upon it, we propose Analytic-DPM, a training-free inference framework that estimates the analytic forms of the variance and KL divergence using the Monte Carlo method and a pretrained score-based model. Further, to correct the potential bias caused by the score-based model, we derive both lower and upper bounds of the optimal variance and clip the estimate for a better result. Empirically, our Analytic-DPM improves the log-likelihood of various DPMs, produces high-quality samples, and enjoys a $20 \times$ to $80 \times$ speed up. + +# 1 INTRODUCTION + +A diffusion process gradually adds noise to a data distribution over a series of timesteps. By learning to reverse it, diffusion probabilistic models (DPMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2020b) define a data generative process. Recently, it has been shown that DPMs are able to produce high-quality samples (Ho et al., 2020; Nichol & Dhariwal, 2021; Song et al., 2020b; Dhariwal & Nichol, 2021), which are comparable or even superior to the current state-of-the-art GAN models (Goodfellow et al., 2014; Brock et al., 2018; Wu et al., 2019; Karras et al., 2020b).
+ +Despite their success, the inference of DPMs (e.g., sampling and density evaluation) often requires iterating over thousands of timesteps, which is two or three orders of magnitude slower (Song et al., 2020a) than other generative models such as GANs. A key problem in the inference is to estimate the variance in each timestep of the reverse process. Most prior works use a handcrafted value for all timesteps and thus usually need to run a long chain to obtain reasonable samples and density values (Nichol & Dhariwal, 2021). Nichol & Dhariwal (2021) attempt to improve the efficiency of sampling by learning a variance network in the reverse process. However, this approach still needs a relatively long trajectory to get a reasonable log-likelihood (see Appendix E in Nichol & Dhariwal (2021)). + +In this work, we present a surprising result that both the optimal reverse variance and the corresponding optimal KL divergence of a DPM have analytic forms w.r.t. its score function (i.e., the gradient of a log density). Building upon it, we propose Analytic-DPM, a training-free inference framework to improve the efficiency of a pretrained DPM while achieving comparable or even superior performance. Analytic-DPM estimates the analytic forms of the variance and KL divergence using the Monte Carlo method and the score-based model in the pretrained DPM. The corresponding trajectory is calculated via a dynamic programming algorithm (Watson et al., 2021). Further, to correct the potential bias caused by the score-based model, we derive both lower and upper bounds of the optimal variance and clip its estimate for a better result. Finally, we reveal an interesting relationship between the score function and the data covariance matrix. + +Analytic-DPM is applicable to a variety of DPMs (Ho et al., 2020; Song et al., 2020a; Nichol & Dhariwal, 2021) in a plug-and-play manner.
Empirically, Analytic-DPM consistently improves the log-likelihood of these DPMs and enjoys a $20 \times$ to $40 \times$ speed up. Besides, Analytic-DPM also consistently improves the sample quality of DDIMs (Song et al., 2020a) and needs at most 50 timesteps (a $20 \times$ to $80 \times$ speed up compared to the full timesteps) to achieve an FID comparable to the corresponding baseline. + +# 2 BACKGROUND + +Diffusion probabilistic models (DPMs) first construct a forward process $q(\pmb{x}_{1:N}|\pmb{x}_0)$ that injects noise into a data distribution $q(\pmb{x}_0)$ , and then reverse the forward process to recover it. Given a forward noise schedule $\beta_n\in (0,1),n = 1,\dots ,N$ , denoising diffusion probabilistic models (DDPMs) (Ho et al., 2020) consider a Markov forward process: + +$$ +q _ {\mathrm {M}} \left(\boldsymbol {x} _ {1: N} \mid \boldsymbol {x} _ {0}\right) = \prod_ {n = 1} ^ {N} q _ {\mathrm {M}} \left(\boldsymbol {x} _ {n} \mid \boldsymbol {x} _ {n - 1}\right), \quad q _ {\mathrm {M}} \left(\boldsymbol {x} _ {n} \mid \boldsymbol {x} _ {n - 1}\right) = \mathcal {N} \left(\boldsymbol {x} _ {n} \mid \sqrt {\alpha_ {n}} \boldsymbol {x} _ {n - 1}, \beta_ {n} \boldsymbol {I}\right), \tag {1} +$$ + +where $\pmb{I}$ is the identity matrix, $\alpha_{n}$ and $\beta_{n}$ are scalars and $\alpha_{n}:= 1 - \beta_{n}$ . Song et al.
(2020a) introduce a more general non-Markov process indexed by a non-negative vector $\lambda = (\lambda_1,\dots ,\lambda_N)\in \mathbb{R}_{\geq 0}^N$ : + +$$ +q _ {\lambda} \left(\boldsymbol {x} _ {1: N} \mid \boldsymbol {x} _ {0}\right) = q _ {\lambda} \left(\boldsymbol {x} _ {N} \mid \boldsymbol {x} _ {0}\right) \prod_ {n = 2} ^ {N} q _ {\lambda} \left(\boldsymbol {x} _ {n - 1} \mid \boldsymbol {x} _ {n}, \boldsymbol {x} _ {0}\right), \tag {2} +$$ + +$$ +q _ {\lambda} \left(\boldsymbol {x} _ {N} \mid \boldsymbol {x} _ {0}\right) = \mathcal {N} \left(\boldsymbol {x} _ {N} \mid \sqrt {\bar {\alpha} _ {N}} \boldsymbol {x} _ {0}, \bar {\beta} _ {N} \boldsymbol {I}\right), +$$ + +$$ +q _ {\lambda} \left(\boldsymbol {x} _ {n - 1} \mid \boldsymbol {x} _ {n}, \boldsymbol {x} _ {0}\right) = \mathcal {N} \left(\boldsymbol {x} _ {n - 1} \mid \tilde {\boldsymbol {\mu}} _ {n} \left(\boldsymbol {x} _ {n}, \boldsymbol {x} _ {0}\right), \lambda_ {n} ^ {2} \boldsymbol {I}\right), +$$ + +$$ +\tilde {\pmb {\mu}} _ {n} (\pmb {x} _ {n}, \pmb {x} _ {0}) = \sqrt {\overline {{\alpha}} _ {n - 1}} \pmb {x} _ {0} + \sqrt {\overline {{\beta}} _ {n - 1} - \lambda_ {n} ^ {2}} \cdot \frac {\pmb {x} _ {n} - \sqrt {\overline {{\alpha}} _ {n}} \pmb {x} _ {0}}{\sqrt {\overline {{\beta}} _ {n}}}. +$$ + +Here $\overline{\alpha}_n\coloneqq \prod_{i = 1}^n\alpha_i$ and $\overline{\beta}_n\coloneqq 1 - \overline{\alpha}_n$ . Indeed, Eq. (2) includes the DDPM forward process as a special case when $\lambda_n^2 = \tilde{\beta}_n$ , where $\tilde{\beta}_n\coloneqq \frac{\overline{\beta}_{n - 1}}{\overline{\beta}_n}\beta_n$ . Another special case of Eq. (2) is the denoising diffusion implicit model (DDIM) forward process, where $\lambda_n^2 = 0$ . Besides, we can further derive $q_{\lambda}(\pmb{x}_n|\pmb{x}_0) = \mathcal{N}(\pmb{x}_n|\sqrt{\overline{\alpha}_n}\pmb{x}_0,\overline{\beta}_n\pmb{I})$ , which is independent of $\lambda$ . In the rest of the paper, we will focus on the forward process in Eq. 
(2) since it is more general, and we will omit the index $\lambda$ and denote it as $q(\pmb{x}_{1:N}|\pmb{x}_0)$ for simplicity. + +The reverse process for Eq. (2) is defined as a Markov process aimed to approximate $q(\pmb{x}_0)$ by gradually denoising from the standard Gaussian distribution $p(\pmb{x}_N) = \mathcal{N}(\pmb{x}_N|\pmb{0},\pmb{I})$ : + +$$ +p (\boldsymbol {x} _ {0: N}) = p (\boldsymbol {x} _ {N}) \prod_ {n = 1} ^ {N} p (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}), \quad p (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}) = \mathcal {N} (\boldsymbol {x} _ {n - 1} | \boldsymbol {\mu} _ {n} (\boldsymbol {x} _ {n}), \sigma_ {n} ^ {2} \boldsymbol {I}), +$$ + +where $\pmb{\mu}_n(\pmb{x}_n)$ is generally parameterized by a time-dependent score-based model $s_n(x_n)$ (Song & Ermon, 2019; Song et al., 2020b): + +$$ +\boldsymbol {\mu} _ {n} \left(\boldsymbol {x} _ {n}\right) = \tilde {\boldsymbol {\mu}} _ {n} \left(\boldsymbol {x} _ {n}, \frac {1}{\sqrt {\bar {\alpha} _ {n}}} \left(\boldsymbol {x} _ {n} + \bar {\beta} _ {n} \boldsymbol {s} _ {n} \left(\boldsymbol {x} _ {n}\right)\right)\right). \tag {3} +$$ + +The reverse process can be learned by optimizing a variational bound $L_{\mathrm{vb}}$ on negative log-likelihood: + +$$ +L _ {\mathrm {v b}} = \mathbb {E} _ {q} \left[ - \log p (\pmb {x} _ {0} | \pmb {x} _ {1}) + \sum_ {n = 2} ^ {N} D _ {\mathrm {K L}} (q (\pmb {x} _ {n - 1} | \pmb {x} _ {0}, \pmb {x} _ {n}) | | p (\pmb {x} _ {n - 1} | \pmb {x} _ {n})) + D _ {\mathrm {K L}} (q (\pmb {x} _ {N} | \pmb {x} _ {0}) | | p (\pmb {x} _ {N})) \right], +$$ + +which is equivalent to optimizing the KL divergence between the forward and the reverse process: + +$$ +\min _ {\left\{\boldsymbol {\mu} _ {n}, \sigma_ {n} ^ {2} \right\} _ {n = 1} ^ {N}} L _ {\mathrm {v b}} \Leftrightarrow \min _ {\left\{\boldsymbol {\mu} _ {n}, \sigma_ {n} ^ {2} \right\} _ {n = 1} ^ {N}} D _ {\mathrm {K L}} (q (\boldsymbol {x} _ {0: N}) | | p (\boldsymbol {x} _ {0: N})). 
\tag {4} +$$ + +To improve the sample quality in practice, instead of directly optimizing $L_{\mathrm{vb}}$ , Ho et al. (2020) consider a reweighted variant of $L_{\mathrm{vb}}$ to learn $s_n(\pmb{x}_n)$ : + +$$ +\min _ {\left\{\boldsymbol {s} _ {n} \right\} _ {n = 1} ^ {N}} \mathbb {E} _ {n} \bar {\beta} _ {n} \mathbb {E} _ {q _ {n} (\boldsymbol {x} _ {n})} \left\| \boldsymbol {s} _ {n} (\boldsymbol {x} _ {n}) - \nabla_ {\boldsymbol {x} _ {n}} \log q _ {n} (\boldsymbol {x} _ {n}) \right\| ^ {2} = \mathbb {E} _ {n, \boldsymbol {x} _ {0}, \boldsymbol {\epsilon}} \| \boldsymbol {\epsilon} + \sqrt {\overline {{\beta}} _ {n}} \boldsymbol {s} _ {n} (\boldsymbol {x} _ {n}) \| ^ {2} + c, \tag {5} +$$ + +where $n$ is uniform between 1 and $N$ , $q_{n}(\pmb{x}_{n})$ is the marginal distribution of the forward process at timestep $n$ , $\epsilon$ is standard Gaussian noise, $\pmb{x}_{n}$ on the right-hand side is reparameterized by $\pmb{x}_{n} = \sqrt{\overline{\alpha}_{n}}\pmb{x}_{0} + \sqrt{\overline{\beta}_{n}}\pmb{\epsilon}$ and $c$ is a constant only related to $q$ . Indeed, Eq. (5) is exactly a weighted sum of score matching objectives (Song & Ermon, 2019), which admits an optimal solution $s_{n}^{*}(\pmb{x}_{n}) = \nabla_{\pmb{x}_{n}}\log q_{n}(\pmb{x}_{n})$ for all $n \in \{1,2,\dots ,N\}$ . + +Note that Eq. (5) provides no learning signal for the variance $\sigma_n^2$ . Indeed, $\sigma_n^2$ is generally handcrafted in most prior works. In DDPMs (Ho et al., 2020), two commonly used settings are $\sigma_n^2 = \beta_n$ and $\sigma_n^2 = \tilde{\beta}_n$ . In DDIMs, Song et al. (2020a) consistently use $\sigma_n^2 = \lambda_n^2$ . We argue that these handcrafted values are not the true optimal solution of Eq. (4) in general, leading to suboptimal performance. + +# 3 ANALYTIC ESTIMATE OF THE OPTIMAL REVERSE VARIANCE + +For a DPM, we first show that both the optimal mean $\pmb{\mu}_n^*(\pmb{x}_n)$ and the optimal variance $\sigma_n^{*2}$ to Eq.
(4) have analytic forms w.r.t. the score function, as summarized in Theorem 1 below. + +Theorem 1. (Score representation of the optimal solution to Eq. (4), proof in Appendix A.2) + +The optimal solution $\pmb{\mu}_n^*(\pmb{x}_n)$ and $\sigma_n^{*2}$ to Eq. (4) are + +$$ +\boldsymbol {\mu} _ {n} ^ {*} (\boldsymbol {x} _ {n}) = \tilde {\boldsymbol {\mu}} _ {n} \left(\boldsymbol {x} _ {n}, \frac {1}{\sqrt {\bar {\alpha} _ {n}}} \left(\boldsymbol {x} _ {n} + \bar {\beta} _ {n} \nabla_ {\boldsymbol {x} _ {n}} \log q _ {n} (\boldsymbol {x} _ {n})\right)\right), \tag {6} +$$ + +$$ +\sigma_ {n} ^ {* 2} = \lambda_ {n} ^ {2} + \left(\sqrt {\frac {\overline {{\beta}} _ {n}}{\alpha_ {n}}} - \sqrt {\overline {{\beta}} _ {n - 1} - \lambda_ {n} ^ {2}}\right) ^ {2} \left(1 - \overline {{\beta}} _ {n} \mathbb {E} _ {q _ {n} (\boldsymbol {x} _ {n})} \frac {| | \nabla_ {\boldsymbol {x} _ {n}} \log q _ {n} (\boldsymbol {x} _ {n}) | | ^ {2}}{d}\right), \tag {7} +$$ + +where $q_{n}(\pmb{x}_{n})$ is the marginal distribution of the forward process at timestep $n$ and $d$ is the dimension of the data. + +The proof of Theorem 1 consists of three key steps: + +- The first step (see Lemma 9) is known as moment matching (Minka, 2013), which states that approximating an arbitrary density by a Gaussian density under the KL divergence is equivalent to matching the first two moments of the two densities. To our knowledge, the connection between moment matching and DPMs has not been revealed before. +- In the second step (see Lemma 13), we carefully use the law of total variance conditioned on $\mathbf{x}_0$ and convert the second moment of $q(\mathbf{x}_{n - 1}|\mathbf{x}_n)$ to that of $q(\mathbf{x}_0|\mathbf{x}_n)$ .
+- In the third step (see Lemma 11), we surprisingly find that the second moment of $q(\pmb{x}_0|\pmb{x}_n)$ can be represented by the score function, and we plug this score representation into the second moment of $q(\pmb{x}_{n - 1}|\pmb{x}_n)$ to obtain the final results in Theorem 1. + +The results in Theorem 1 (and other results to appear later) can be further simplified for the DDPM forward process (i.e., $\lambda_n^2 = \tilde{\beta}_n$ , see Appendix D for details). Besides, we can also extend Theorem 1 to DPMs with continuous timesteps (Song et al., 2020b; Kingma et al., 2021), where the corresponding optimal mean and variance are also determined by the score function in an analytic form (see Appendix E.1 for the extension). + +Note that our analytic form of the optimal mean $\pmb{\mu}_n^*(\pmb{x}_n)$ in Eq. (6) and the previous parameterization of $\pmb{\mu}_n(\pmb{x}_n)$ (Ho et al., 2020) in Eq. (3) coincide. The only difference is that Eq. (3) replaces the score function $\nabla_{\pmb{x}_n}\log q_n(\pmb{x}_n)$ in Eq. (6) with the score-based model $s_n(\pmb{x}_n)$ . This result explicitly shows that Eq. (5) essentially shares the same optimal mean solution as the $L_{\mathrm{vb}}$ objective, providing a simple alternative perspective on prior works. + +![](images/635d0e9d255f42d5f0fbb70fb4ebe72b3afec3f32f01b732daac3cea3ae7d29d.jpg) +(a) Variance + +![](images/8ef483b263eac0b32ca7c3cc77b344e4b8dd72efa59dccdf3beb44d86afce0f3.jpg) +(b) Terms in $L_{\mathrm{vb}}$ +Figure 1: Comparing our analytic estimate $\hat{\sigma}_n^2$ and prior works with handcrafted variances $\beta_{n}$ and $\tilde{\beta}_{n}$ . (a) compares the values of the variance for different timesteps. (b) compares the term in $L_{\mathrm{vb}}$ corresponding to each timestep. The value of $L_{\mathrm{vb}}$ is the area under the corresponding curve.
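The score representation in the third step above can be checked numerically in a toy 1-D setting. The sketch below is our own construction (not from the paper): for $x_0$ uniform on $\{-1,+1\}$, both the posterior variance $\mathrm{Var}[x_0 \mid x_n]$ and the score of $q_n$ have closed forms, and a Monte Carlo average confirms that $\mathbb{E}_{q_n}\mathrm{Var}[x_0 \mid x_n]$ matches the score-based expression $(\bar{\beta}_n/\bar{\alpha}_n)(1 - \bar{\beta}_n\,\mathbb{E}\|\nabla_{x_n}\log q_n\|^2)$ of the kind that feeds into Eq. (7).

```python
import numpy as np

rng = np.random.default_rng(0)
abar, bbar = 0.5, 0.5    # \bar{alpha}_n and \bar{beta}_n = 1 - \bar{alpha}_n at some timestep n

# Toy data: x0 uniform on {-1, +1}. The forward marginal q_n is then a two-component
# Gaussian mixture whose score and posterior variance Var[x0 | xn] are known in closed form.
x0 = rng.choice([-1.0, 1.0], size=1_000_000)
xn = np.sqrt(abar) * x0 + np.sqrt(bbar) * rng.standard_normal(x0.shape)

post_mean = np.tanh(np.sqrt(abar) * xn / bbar)       # E[x0 | xn]
post_var = 1.0 - post_mean ** 2                      # Var[x0 | xn]
score = -(xn - np.sqrt(abar) * post_mean) / bbar     # exact score of q_n

lhs = post_var.mean()                                     # E_{q_n} Var[x0 | xn]
rhs = (bbar / abar) * (1.0 - bbar * np.mean(score ** 2))  # score-based expression
print(lhs, rhs)  # the two values agree up to Monte Carlo error
```

The agreement of `lhs` and `rhs` is exactly the mechanism that lets the optimal reverse variance be written in terms of the expected squared score norm.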
+ +In contrast to the handcrafted strategies used in (Ho et al., 2020; Song et al., 2020a), Theorem 1 shows that the optimal reverse variance $\sigma_{n}^{*2}$ can be estimated without any extra training, given a pretrained score-based model $s_n(\pmb{x}_n)$ . Specifically, we first estimate the expected mean squared norm of $\nabla_{\pmb{x}_n}\log q_n(\pmb{x}_n)$ by $\Gamma = (\Gamma_1,\dots,\Gamma_N)$ , where + +$$ +\Gamma_ {n} = \frac {1}{M} \sum_ {m = 1} ^ {M} \frac {\left| \left| \boldsymbol {s} _ {n} \left(\boldsymbol {x} _ {n , m}\right) \right| \right| ^ {2}}{d}, \quad \boldsymbol {x} _ {n, m} \stackrel {iid}{\sim} q _ {n} (\boldsymbol {x} _ {n}). \tag {8} +$$ + +$M$ is the number of Monte Carlo samples. We only need to calculate $\Gamma$ once for a pretrained model and reuse it in downstream computations (see Appendix H.1 for a detailed discussion of the computation cost of $\Gamma$ ). Then, according to Eq. (7), we estimate $\sigma_{n}^{*2}$ as follows: + +$$ +\hat {\sigma} _ {n} ^ {2} = \lambda_ {n} ^ {2} + \left(\sqrt {\frac {\bar {\beta} _ {n}}{\alpha_ {n}}} - \sqrt {\bar {\beta} _ {n - 1} - \lambda_ {n} ^ {2}}\right) ^ {2} (1 - \bar {\beta} _ {n} \Gamma_ {n}). \tag {9} +$$ + +We empirically validate Theorem 1. In Figure 1 (a), we plot our analytic estimate $\hat{\sigma}_n^2$ of a DDPM trained on CIFAR10, as well as the baselines $\beta_{n}$ and $\tilde{\beta}_{n}$ used by Ho et al. (2020). At small timesteps, these strategies behave differently. Figure 1 (b) shows that our $\hat{\sigma}_n^2$ outperforms the baselines for each term of $L_{\mathrm{vb}}$ , especially at the small timesteps. We also obtain similar results on other datasets (see Appendix G.1). Besides, we show that only a small number of Monte Carlo (MC) samples (e.g., $M = 10$ or $100$ ) is required to make the MC variance sufficiently small and to match the performance obtained with a large $M$ (see Appendix G.2).
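Eqs. (8) and (9) are cheap to implement. Below is a minimal sketch (all names are our own; a unit-Gaussian toy distribution with its exact score standing in for real data and a pretrained network):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, d = 100, 100, 2          # timesteps, Monte Carlo samples, data dimension

beta = np.linspace(1e-4, 0.02, N)   # a linear forward noise schedule (index 0 is n = 1)
alpha = 1.0 - beta
abar = np.cumprod(alpha)            # \bar{alpha}_n
bbar = 1.0 - abar                   # \bar{beta}_n

def score_model(x, n):
    # Stand-in for a pretrained score network: for q(x0) = N(0, I) the marginal
    # q_n = N(0, (abar_n + bbar_n) I) = N(0, I), whose exact score is -x.
    return -x

# Eq. (8): Gamma_n = (1/M) sum_m ||s_n(x_{n,m})||^2 / d with x_{n,m} ~ q_n.
x0 = rng.standard_normal((M, d))
Gamma = np.empty(N)
for n in range(N):
    xn = np.sqrt(abar[n]) * x0 + np.sqrt(bbar[n]) * rng.standard_normal((M, d))
    Gamma[n] = np.mean(np.sum(score_model(xn, n) ** 2, axis=1)) / d

# Eq. (9) for the DDPM forward process, where lambda_n^2 = tilde{beta}_n.
bbar_prev = np.concatenate([[0.0], bbar[:-1]])   # \bar{beta}_{n-1}
lam2 = bbar_prev / bbar * beta                   # tilde{beta}_n
coef = (np.sqrt(bbar / alpha) - np.sqrt(bbar_prev - lam2)) ** 2
sigma2_hat = lam2 + coef * (1.0 - bbar * Gamma)
```

For a real DPM, `score_model` would be the pretrained network and `x0` a batch of training data; $\Gamma$ is computed once and cached for every downstream trajectory.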
We also discuss the stochasticity of $L_{\mathrm{vb}}$ after plugging in $\hat{\sigma}_n^2$ ; see Appendix H.2. + +# 3.1 BOUNDING THE OPTIMAL REVERSE VARIANCE TO REDUCE BIAS + +According to Eq. (7) and Eq. (9), the bias of the analytic estimate $\hat{\sigma}_n^2$ is + +$$ +\left| \sigma_ {n} ^ {* 2} - \hat {\sigma} _ {n} ^ {2} \right| = \underbrace {\left(\sqrt {\frac {\bar {\beta} _ {n}}{\alpha_ {n}}} - \sqrt {\bar {\beta} _ {n - 1} - \lambda_ {n} ^ {2}}\right) ^ {2} \bar {\beta} _ {n}} _ {\text {Coefficient}} \underbrace {\left| \Gamma_ {n} - \mathbb {E} _ {q _ {n} \left(\boldsymbol {x} _ {n}\right)} \frac {\left| \left| \nabla_ {\boldsymbol {x} _ {n}} \log q _ {n} \left(\boldsymbol {x} _ {n}\right) \right| \right| ^ {2}}{d} \right|} _ {\text {Approximation error}}. \tag {10} +$$ + +Our estimate of the variance employs a score-based model $s_n(x_n)$ to approximate the true score function $\nabla_{\boldsymbol{x}_n} \log q_n(\boldsymbol{x}_n)$ . Thus, the approximation error in Eq. (10) is irreducible given a pretrained model. Meanwhile, the coefficient in Eq. (10) can be large if we use a shorter trajectory to sample (see details in Section 4), potentially resulting in a large bias. + +To reduce the bias, we derive bounds of the optimal reverse variance $\sigma_{n}^{*2}$ and clip our estimate based on the bounds. Importantly, these bounds are unrelated to the data distribution $q(\pmb{x}_0)$ and hence can be efficiently calculated. We first derive both upper and lower bounds of $\sigma_{n}^{*2}$ without any assumption about the data. Then we show another upper bound of $\sigma_{n}^{*2}$ if the data distribution is bounded. We formalize these bounds in Theorem 2. + +Theorem 2.
(Bounds of the optimal reverse variance, proof in Appendix A.3) + +$\sigma_{n}^{*2}$ has the following lower and upper bounds: + +$$ +\lambda_ {n} ^ {2} \leq \sigma_ {n} ^ {* 2} \leq \lambda_ {n} ^ {2} + \left(\sqrt {\frac {\bar {\beta} _ {n}}{\alpha_ {n}}} - \sqrt {\bar {\beta} _ {n - 1} - \lambda_ {n} ^ {2}}\right) ^ {2}. \tag {11} +$$ + +If we further assume $q(\pmb{x}_0)$ is a bounded distribution in $[a, b]^d$ , where $d$ is the dimension of the data, then $\sigma_n^{*2}$ can be further upper bounded by + +$$ +\sigma_ {n} ^ {* 2} \leq \lambda_ {n} ^ {2} + \left(\sqrt {\bar {\alpha} _ {n - 1}} - \sqrt {\bar {\beta} _ {n - 1} - \lambda_ {n} ^ {2}} \cdot \sqrt {\frac {\bar {\alpha} _ {n}}{\bar {\beta} _ {n}}}\right) ^ {2} \left(\frac {b - a}{2}\right) ^ {2}. \tag {12} +$$ + +Theorem 2 states that the handcrafted reverse variance $\lambda_n^2$ in prior works (Ho et al., 2020; Song et al., 2020a) underestimates $\sigma_n^{*2}$ ; for instance, $\lambda_n^2 = \tilde{\beta}_n$ in DDPM. We compare it with our estimate in Figure 1 (a), and the results agree with Theorem 2. Besides, the boundedness assumption on $q(\pmb{x}_0)$ is satisfied in many scenarios, including generative modelling of images. Which upper bound among Eq. (11) and Eq. (12) is tighter depends on $n$ ; therefore, we clip our estimate using the smaller of the two. Further, we show these bounds are numerically tight in Appendix G.3. + +# 4 ANALYTIC ESTIMATION OF THE OPTIMAL TRAJECTORY + +The number of full timesteps $N$ can be large, making the inference slow in practice. Hence, we can construct a shorter forward process $q(\pmb{x}_{\tau_1}, \dots, \pmb{x}_{\tau_K} | \pmb{x}_0)$ restricted to a trajectory $1 = \tau_1 < \dots < \tau_K = N$ of $K$ timesteps (Song et al., 2020a; Nichol & Dhariwal, 2021; Watson et al., 2021), and $K$ can be much smaller than $N$ to speed up the inference.
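For concreteness, the bound-based clipping of Section 3.1 amounts to a few array operations. The sketch below is our own illustration under the DDPM choice $\lambda_n^2 = \tilde{\beta}_n$, assuming data scaled to $[-1, 1]^d$ so that $(b - a)/2 = 1$:

```python
import numpy as np

# Hypothetical schedule arrays for a DDPM-style process (index 0 corresponds to n = 1).
N = 100
beta = np.linspace(1e-4, 0.02, N)
alpha = 1.0 - beta
abar = np.cumprod(alpha)                        # \bar{alpha}_n
bbar = 1.0 - abar                               # \bar{beta}_n
abar_prev = np.concatenate([[1.0], abar[:-1]])  # \bar{alpha}_{n-1}
bbar_prev = np.concatenate([[0.0], bbar[:-1]])  # \bar{beta}_{n-1}
lam2 = bbar_prev / bbar * beta                  # lambda_n^2 = tilde{beta}_n

# Eq. (11): lambda_n^2 <= sigma_n*^2 <= lambda_n^2 + (sqrt(bbar_n/alpha_n) - sqrt(bbar_{n-1} - lambda_n^2))^2.
upper_11 = lam2 + (np.sqrt(bbar / alpha) - np.sqrt(bbar_prev - lam2)) ** 2

# Eq. (12): extra upper bound when q(x0) lives in [a, b]^d; images in [-1, 1] give (b - a)/2 = 1.
a, b = -1.0, 1.0
upper_12 = lam2 + (np.sqrt(abar_prev)
                   - np.sqrt(bbar_prev - lam2) * np.sqrt(abar / bbar)) ** 2 * ((b - a) / 2) ** 2

def clip_variance(sigma2_hat):
    # Clip the raw analytic estimate into [lower bound, min of the two upper bounds].
    return np.clip(sigma2_hat, lam2, np.minimum(upper_11, upper_12))
```

Since both upper bounds exceed the lower bound $\lambda_n^2$ by construction, the clipping interval is always non-empty.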
Formally, the shorter process is defined as $q(\pmb{x}_{\tau_1}, \dots, \pmb{x}_{\tau_K} | \pmb{x}_0) = q(\pmb{x}_{\tau_K} | \pmb{x}_0) \prod_{k=2}^{K} q(\pmb{x}_{\tau_{k-1}} | \pmb{x}_{\tau_k}, \pmb{x}_0)$ , where + +$$ +q \left(\boldsymbol {x} _ {\tau_ {k - 1}} \mid \boldsymbol {x} _ {\tau_ {k}}, \boldsymbol {x} _ {0}\right) = \mathcal {N} \left(\boldsymbol {x} _ {\tau_ {k - 1}} \mid \tilde {\boldsymbol {\mu}} _ {\tau_ {k - 1} \mid \tau_ {k}} \left(\boldsymbol {x} _ {\tau_ {k}}, \boldsymbol {x} _ {0}\right), \lambda_ {\tau_ {k - 1} \mid \tau_ {k}} ^ {2} \boldsymbol {I}\right), \tag {13} +$$ + +$$ +\tilde {\boldsymbol {\mu}} _ {\tau_ {k - 1} | \tau_ {k}} (\boldsymbol {x} _ {\tau_ {k}}, \boldsymbol {x} _ {0}) = \sqrt {\overline {{\alpha}} _ {\tau_ {k - 1}}} \boldsymbol {x} _ {0} + \sqrt {\overline {{\beta}} _ {\tau_ {k - 1}} - \lambda_ {\tau_ {k - 1} | \tau_ {k}} ^ {2}} \cdot \frac {\boldsymbol {x} _ {\tau_ {k}} - \sqrt {\overline {{\alpha}} _ {\tau_ {k}}} \boldsymbol {x} _ {0}}{\sqrt {\overline {{\beta}} _ {\tau_ {k}}}}. +$$ + +The corresponding reverse process is $p(\pmb{x}_0, \pmb{x}_{\tau_1}, \dots, \pmb{x}_{\tau_K}) = p(\pmb{x}_{\tau_K}) \prod_{k=1}^{K} p(\pmb{x}_{\tau_{k-1}} | \pmb{x}_{\tau_k})$ , where + +$$ +p (\pmb {x} _ {\tau_ {k - 1}} | \pmb {x} _ {\tau_ {k}}) = \mathcal {N} (\pmb {x} _ {\tau_ {k - 1}} | \pmb {\mu} _ {\tau_ {k - 1} | \tau_ {k}} (\pmb {x} _ {\tau_ {k}}), \sigma_ {\tau_ {k - 1} | \tau_ {k}} ^ {2} \pmb {I}). 
+$$ + +According to Theorem 1, the mean and variance of the optimal $p^*(\pmb{x}_{\tau_{k-1}} | \pmb{x}_{\tau_k})$ in the sense of KL minimization are + +$$ +\boldsymbol {\mu} _ {\tau_ {k - 1} | \tau_ {k}} ^ {*} (\boldsymbol {x} _ {\tau_ {k}}) = \tilde {\boldsymbol {\mu}} _ {\tau_ {k - 1} | \tau_ {k}} \left(\boldsymbol {x} _ {\tau_ {k}}, \frac {1}{\sqrt {\overline {{\alpha}} _ {\tau_ {k}}}} (\boldsymbol {x} _ {\tau_ {k}} + \overline {{\beta}} _ {\tau_ {k}} \nabla_ {\boldsymbol {x} _ {\tau_ {k}}} \log q (\boldsymbol {x} _ {\tau_ {k}}))\right), +$$ + +$$ +\sigma_ {\tau_ {k - 1} | \tau_ {k}} ^ {* 2} = \lambda_ {\tau_ {k - 1} | \tau_ {k}} ^ {2} + \left(\sqrt {\frac {\overline {{\beta}} _ {\tau_ {k}}}{\alpha_ {\tau_ {k} | \tau_ {k - 1}}}} - \sqrt {\overline {{\beta}} _ {\tau_ {k - 1}} - \lambda_ {\tau_ {k - 1} | \tau_ {k}} ^ {2}}\right) ^ {2} (1 - \overline {{\beta}} _ {\tau_ {k}} \mathbb {E} _ {q (\boldsymbol {x} _ {\tau_ {k}})} \frac {| | \nabla_ {\boldsymbol {x} _ {\tau_ {k}}} \log q (\boldsymbol {x} _ {\tau_ {k}}) | | ^ {2}}{d}), +$$ + +where $\alpha_{\tau_k|\tau_{k-1}} := \overline{\alpha}_{\tau_k} / \overline{\alpha}_{\tau_{k-1}}$ . According to Theorem 2, we can derive similar bounds for $\sigma_{\tau_{k-1}|\tau_k}^{*2}$ (see details in Appendix C). Similarly to Eq. (9), the estimate of $\sigma_{\tau_{k-1}|\tau_k}^{*2}$ is + +$$ +\hat {\sigma} _ {\tau_ {k - 1} | \tau_ {k}} ^ {2} = \lambda_ {\tau_ {k - 1} | \tau_ {k}} ^ {2} + \left(\sqrt {\frac {\overline {{\beta}} _ {\tau_ {k}}}{\alpha_ {\tau_ {k} | \tau_ {k - 1}}}} - \sqrt {\overline {{\beta}} _ {\tau_ {k - 1}} - \lambda_ {\tau_ {k - 1} | \tau_ {k}} ^ {2}}\right) ^ {2} (1 - \overline {{\beta}} _ {\tau_ {k}} \Gamma_ {\tau_ {k}}), +$$ + +where $\Gamma$ is defined in Eq. (8) and can be shared across different choices of trajectory.
Based on the optimal reverse process $p^*$ above, we further optimize the trajectory: + +$$ +\min _ {\tau_ {1}, \dots , \tau_ {K}} D _ {\mathrm {K L}} \left(q \left(\boldsymbol {x} _ {0}, \boldsymbol {x} _ {\tau_ {1}}, \dots , \boldsymbol {x} _ {\tau_ {K}}\right) \| p ^ {*} \left(\boldsymbol {x} _ {0}, \boldsymbol {x} _ {\tau_ {1}}, \dots , \boldsymbol {x} _ {\tau_ {K}}\right)\right) = \frac {d}{2} \sum_ {k = 2} ^ {K} J \left(\tau_ {k - 1}, \tau_ {k}\right) + c, \tag {14} +$$ + +where $J(\tau_{k-1}, \tau_k) = \log(\sigma_{\tau_{k-1}|\tau_k}^{*2} / \lambda_{\tau_{k-1}|\tau_k}^2)$ and $c$ is a constant unrelated to the trajectory $\tau$ (see proof in Appendix A.4). The KL in Eq. (14) decomposes into $K - 1$ terms, each of which has an analytic form w.r.t. the score function. We view each term as a cost function $J$ evaluated at $(\tau_{k-1}, \tau_k)$ , which can be efficiently estimated by $J(\tau_{k-1}, \tau_k) \approx \log(\hat{\sigma}_{\tau_{k-1}|\tau_k}^2 / \lambda_{\tau_{k-1}|\tau_k}^2)$ and does not require any neural network computation once $\Gamma$ is given. While the logarithm makes this estimate biased even when the score function is exact, the bias can be reduced by increasing $M$ . + +As a result, Eq. (14) reduces to a canonical least-cost-path problem (Watson et al., 2021) on a directed graph, where the nodes are $\{1,2,\dots ,N\}$ and the edge from $s$ to $t$ has cost $J(s,t)$ . We want to find a least-cost path of $K$ nodes starting from 1 and terminating at $N$ . This problem can be solved by the dynamic programming (DP) algorithm introduced by Watson et al. (2021), which we present in Appendix B. Besides, we can also extend Eq. (14) to DPMs with continuous timesteps (Song et al., 2020b; Kingma et al., 2021), where the corresponding optimal KL divergences also decompose into terms determined by score functions, so the DP algorithm remains applicable. See Appendix E.2 for the extension.
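The least-cost-path reduction can be illustrated with a compact dynamic program (our own sketch, not the exact implementation of Watson et al. (2021)); here `J[s, t]` stands for the estimated cost $\log(\hat{\sigma}^2_{s|t} / \lambda^2_{s|t})$, replaced by a synthetic cost for the demo:

```python
import numpy as np

def least_cost_trajectory(J, K):
    """Pick K timesteps 1 = tau_1 < ... < tau_K = N minimizing
    sum_k J[tau_{k-1}, tau_k] via dynamic programming in O(K N^2).
    J is an (N+1) x (N+1) cost matrix, 1-indexed, with J[s, t] defined for s < t."""
    N = J.shape[0] - 1
    C = np.full((K + 1, N + 1), np.inf)   # C[k, t]: least cost of a k-node path from 1 to t
    parent = np.zeros((K + 1, N + 1), dtype=int)
    C[1, 1] = 0.0
    for k in range(2, K + 1):
        for t in range(k, N + 1):
            costs = C[k - 1, 1:t] + J[1:t, t]
            s = int(np.argmin(costs)) + 1
            C[k, t], parent[k, t] = costs[s - 1], s
    path, t = [N], N                      # backtrack from (K, N)
    for k in range(K, 1, -1):
        t = parent[k, t]
        path.append(t)
    return [int(v) for v in path[::-1]]

# Demo: a synthetic cost favoring evenly spaced timesteps.
N, K = 10, 4
J = np.full((N + 1, N + 1), np.inf)
for s in range(1, N + 1):
    for t in range(s + 1, N + 1):
        J[s, t] = (t - s) ** 2
traj = least_cost_trajectory(J, K)
print(traj)  # [1, 4, 7, 10]
```

With the real cost, `J` is filled from $\Gamma$ alone, so searching over trajectories requires no additional network evaluations.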
+ +# 5 RELATIONSHIP BETWEEN THE SCORE FUNCTION AND THE DATA COVARIANCE MATRIX + +In this part, we further reveal a relationship between the score function and the data covariance matrix. Indeed, the data covariance matrix can be decomposed into the sum of $\mathbb{E}_{q(\boldsymbol{x}_n)}\mathrm{Cov}_{q(\boldsymbol{x}_0|\boldsymbol{x}_n)}[\boldsymbol{x}_0]$ and $\mathrm{Cov}_{q(\boldsymbol{x}_n)}\mathbb{E}_{q(\boldsymbol{x}_0|\boldsymbol{x}_n)}[\boldsymbol{x}_0]$ , where the first term can be represented by the score function. Further, the second term is negligible when $n$ is sufficiently large because $\boldsymbol{x}_0$ and $\boldsymbol{x}_n$ are almost independent. In such cases, the data covariance matrix is almost determined by the score function. Currently, the relationship is purely theoretical and its practical implication is unclear. See details in Appendix A.5. + +# 6 EXPERIMENTS + +We consider the DDPM forward process $(\lambda_n^2 = \tilde{\beta}_n)$ and the DDIM forward process $(\lambda_n^2 = 0)$ , which are the two most commonly used special cases of Eq. (2). We denote our method, which uses the analytic estimate $\sigma_n^2 = \hat{\sigma}_n^2$ , as Analytic-DPM, and explicitly call it Analytic-DDPM or Analytic-DDIM according to which forward process is used. We compare our Analytic-DPM with the original DDPM (Ho et al., 2020), where the reverse variance is either $\sigma_n^2 = \tilde{\beta}_n$ or $\sigma_n^2 = \beta_n$ , as well as the original DDIM (Song et al., 2020a), where the reverse variance is $\sigma_n^2 = \lambda_n^2 = 0$ . + +We adopt two methods to determine the trajectory for both Analytic-DPM and the baselines. The first is the even trajectory (ET) (Nichol & Dhariwal, 2021), where the timesteps are determined according to a fixed stride (see details in Appendix F.4). The second is the optimal trajectory (OT) (Watson et al., 2021), where the timesteps are calculated via dynamic programming (see Section 4).
Note that the baselines calculate the OT based on $L_{\mathrm{vb}}$ with their handcrafted variances (Watson et al., 2021). + +We apply our Analytic-DPM to three pretrained score-based models provided by prior works (Ho et al., 2020; Song et al., 2020a; Nichol & Dhariwal, 2021), as well as two score-based models trained by ourselves. The pretrained score-based models are trained on CelebA 64x64 (Liu et al., 2015), ImageNet 64x64 (Deng et al., 2009) and LSUN Bedroom (Yu et al., 2015) respectively. Our score-based models are trained on CIFAR10 (Krizhevsky et al., 2009) with two different forward noise schedules: the linear schedule (LS) (Ho et al., 2020) and the cosine schedule (CS) (Nichol & Dhariwal, 2021), denoted CIFAR10 (LS) and CIFAR10 (CS) respectively. The number of full timesteps $N$ is 4000 for ImageNet 64x64 and 1000 for the other datasets. During sampling, we only display the mean of $p(\boldsymbol{x}_0|\boldsymbol{x}_1)$ and discard the noise following Ho et al. (2020), and we additionally clip the noise scale $\sigma_2$ of $p(\boldsymbol{x}_1|\boldsymbol{x}_2)$ for all methods compared in Table 2 (see details in Appendix F.2 and its ablation study in Appendix G.4). See more experimental details in Appendix F. + +Table 1: Negative log-likelihood (bits/dim) $\downarrow$ under the DDPM forward process. We show results under trajectories of different numbers of timesteps $K$ . We select the minimum $K$ such that Analytic-DPM can outperform the baselines with full timesteps and underline the corresponding results. + +
| | Model \ # timesteps $K$ | 10 | 25 | 50 | 100 | 200 | 400 | 1000 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | **CIFAR10 (LS)** | | | | | | | |
| ET | DDPM, $\sigma_n^2 = \tilde{\beta}_n$ | 74.95 | 24.98 | 12.01 | 7.08 | 5.03 | 4.29 | 3.73 |
| | DDPM, $\sigma_n^2 = \beta_n$ | 6.99 | 6.11 | 5.44 | 4.86 | 4.39 | 4.07 | 3.75 |
| | Analytic-DDPM | 5.47 | 4.79 | 4.38 | 4.07 | 3.84 | 3.71 | 3.59 |
| OT | DDPM, $\sigma_n^2 = \beta_n$ | 5.38 | 4.34 | 3.97 | 3.82 | 3.77 | 3.75 | 3.75 |
| | Analytic-DDPM | 4.11 | <u>3.68</u> | 3.61 | 3.59 | 3.59 | 3.59 | 3.59 |
| | **CIFAR10 (CS)** | | | | | | | |
| ET | DDPM, $\sigma_n^2 = \tilde{\beta}_n$ | 75.96 | 24.94 | 11.96 | 7.04 | 4.95 | 4.19 | 3.60 |
| | DDPM, $\sigma_n^2 = \beta_n$ | 6.51 | 5.55 | 4.92 | 4.41 | 4.03 | 3.78 | 3.54 |
| | Analytic-DDPM | 5.08 | 4.45 | 4.09 | 3.83 | 3.64 | 3.53 | 3.42 |
| OT | DDPM, $\sigma_n^2 = \beta_n$ | 5.51 | 4.30 | 3.86 | 3.65 | 3.57 | 3.54 | 3.54 |
| | Analytic-DDPM | 3.99 | 3.56 | <u>3.47</u> | 3.44 | 3.43 | 3.42 | 3.42 |
| | **CelebA 64x64** | | | | | | | |
| ET | DDPM, $\sigma_n^2 = \tilde{\beta}_n$ | 33.42 | 13.09 | 7.14 | 4.60 | 3.45 | 3.03 | 2.71 |
| | DDPM, $\sigma_n^2 = \beta_n$ | 6.67 | 5.72 | 4.98 | 4.31 | 3.74 | 3.34 | 2.93 |
| | Analytic-DDPM | 4.54 | 3.89 | 3.48 | 3.16 | 2.92 | 2.79 | 2.66 |
| OT | DDPM, $\sigma_n^2 = \beta_n$ | 4.76 | 3.58 | 3.16 | 2.99 | 2.94 | 2.93 | 2.93 |
| | Analytic-DDPM | 2.97 | 2.71 | <u>2.67</u> | 2.66 | 2.66 | 2.66 | 2.66 |

| | Model \ # timesteps $K$ | 25 | 50 | 100 | 200 | 400 | 1000 | 4000 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | **ImageNet 64x64** | | | | | | | |
| ET | DDPM, $\sigma_n^2 = \tilde{\beta}_n$ | 105.87 | 46.25 | 22.02 | 12.10 | 7.59 | 5.04 | 3.89 |
| | DDPM, $\sigma_n^2 = \beta_n$ | 5.81 | 5.20 | 4.70 | 4.31 | 4.04 | 3.81 | 3.65 |
| | Analytic-DDPM | 4.78 | 4.42 | 4.15 | 3.95 | 3.81 | 3.69 | 3.61 |
| OT | DDPM, $\sigma_n^2 = \beta_n$ | 4.56 | 4.09 | 3.84 | 3.73 | 3.68 | 3.65 | 3.65 |
| | Analytic-DDPM | 3.83 | 3.70 | <u>3.64</u> | 3.62 | 3.62 | 3.61 | 3.61 |
+ +We conduct extensive experiments to demonstrate that Analytic-DPM can consistently improve the inference efficiency of a pretrained DPM while achieving comparable or even superior performance. Specifically, Section 6.1 and Section 6.2 present the likelihood and sample quality results respectively. Additional experiments such as ablation studies can be found in Appendix G. + +# 6.1 LIKELIHOOD RESULTS + +Since $\lambda_{n}^{2} = 0$ in the DDIM forward process, its variational bound $L_{\mathrm{vb}}$ is infinite. Therefore, we only consider the likelihood results under the DDPM forward process. As shown in Table 1, on all three datasets, our Analytic-DPM consistently improves the likelihood results of the original DDPM using both ET and OT. Remarkably, using a much shorter trajectory (i.e., much less inference time), Analytic-DPM with OT can still outperform the baselines. In Table 1, we select the minimum $K$ such that Analytic-DPM can outperform the baselines with full timesteps and underline the corresponding results. Specifically, Analytic-DPM enjoys a $40\times$ speed up on CIFAR10 (LS) and ImageNet 64x64, and a $20\times$ speed up on CIFAR10 (CS) and CelebA 64x64. + +Although we mainly focus on learning-free strategies for choosing the reverse variance, we also compare to another strong baseline that predicts the variance with a neural network (Nichol & Dhariwal, 2021). With full timesteps, Analytic-DPM achieves an NLL of 3.61 on ImageNet 64x64, which is very close to the 3.57 reported in Nichol & Dhariwal (2021). Besides, while Nichol & Dhariwal (2021) report that the ET drastically reduces the log-likelihood performance of their neural-network-parameterized variance, Analytic-DPM performs well with the ET. See details in Appendix G.6. + +Table 2: FID $\downarrow$ under the DDPM and DDIM forward processes. All are evaluated under the even trajectory (ET). The result with ${}^{ \dagger }$ is slightly better than 3.17 reported by Ho et al.
(2020), because we use an improved model architecture following Nichol & Dhariwal (2021).

| Model \ # timesteps K | 10 | 25 | 50 | 100 | 200 | 400 | 1000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **CIFAR10 (LS)** |  |  |  |  |  |  |  |
| DDPM, $\sigma_n^2 = \beta_n$ | 44.45 | 21.83 | 15.21 | 10.94 | 8.23 | 6.43 | 5.11 |
| DDPM, $\sigma_n^2 = \tilde{\beta}_n$ | 233.41 | 125.05 | 66.28 | 31.36 | 12.96 | 4.86 | 3.04${}^{\dagger}$ |
| Analytic-DDPM | 34.26 | 11.60 | 7.25 | 5.40 | 4.01 | 3.62 | 4.03 |
| DDIM | 21.31 | 10.70 | 7.74 | 6.08 | 5.07 | 4.61 | 4.13 |
| Analytic-DDIM | 14.00 | 5.81 | 4.04 | 3.55 | 3.39 | 3.50 | 3.74 |
| **CIFAR10 (CS)** |  |  |  |  |  |  |  |
| DDPM, $\sigma_n^2 = \beta_n$ | 34.76 | 16.18 | 11.11 | 8.38 | 6.66 | 5.65 | 4.92 |
| DDPM, $\sigma_n^2 = \tilde{\beta}_n$ | 205.31 | 84.71 | 37.35 | 14.81 | 5.74 | 3.40 | 3.34 |
| Analytic-DDPM | 22.94 | 8.50 | 5.50 | 4.45 | 4.04 | 3.96 | 4.31 |
| DDIM | 34.34 | 16.68 | 10.48 | 7.94 | 6.69 | 5.78 | 4.89 |
| Analytic-DDIM | 26.43 | 9.96 | 6.02 | 4.88 | 4.92 | 5.00 | 4.66 |
| **CelebA 64x64** |  |  |  |  |  |  |  |
| DDPM, $\sigma_n^2 = \beta_n$ | 36.69 | 24.46 | 18.96 | 14.31 | 10.48 | 8.09 | 5.95 |
| DDPM, $\sigma_n^2 = \tilde{\beta}_n$ | 294.79 | 115.69 | 53.39 | 25.65 | 9.72 | 3.95 | 3.16 |
| Analytic-DDPM | 28.99 | 16.01 | 11.23 | 8.08 | 6.51 | 5.87 | 5.21 |
| DDIM | 20.54 | 13.45 | 9.33 | 6.60 | 4.96 | 4.15 | 3.40 |
| Analytic-DDIM | 15.62 | 9.22 | 6.13 | 4.29 | 3.46 | 3.38 | 3.13 |

| Model \ # timesteps K | 25 | 50 | 100 | 200 | 400 | 1000 | 4000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **ImageNet 64x64** |  |  |  |  |  |  |  |
| DDPM, $\sigma_n^2 = \beta_n$ | 29.21 | 21.71 | 19.12 | 17.81 | 17.48 | 16.84 | 16.55 |
| DDPM, $\sigma_n^2 = \tilde{\beta}_n$ | 170.28 | 83.86 | 45.04 | 28.39 | 21.38 | 17.58 | 16.38 |
| Analytic-DDPM | 32.56 | 22.45 | 18.80 | 17.16 | 16.40 | 16.14 | 16.34 |
| DDIM | 26.06 | 20.10 | 18.09 | 17.84 | 17.74 | 17.73 | 19.00 |
| Analytic-DDIM | 25.98 | 19.23 | 17.73 | 17.49 | 17.44 | 17.57 | 18.98 |

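The trajectories referenced in these tables select $K$ of the $N$ training timesteps. As a rough sketch of the two common spacings, the even trajectory (ET) used above and the quadratic trajectory used for some baselines in Table 3 can be written as follows (a minimal illustration; the exact rounding conventions in the implementations of Nichol & Dhariwal (2021) and Song et al. (2020a) may differ):

```python
import numpy as np

def even_trajectory(N, K):
    """K timesteps spaced (roughly) evenly over {1, ..., N} (the ET)."""
    return np.linspace(1, N, K).round().astype(int)

def quadratic_trajectory(N, K):
    """K timesteps spaced quadratically, placing more steps near t = 1."""
    t = np.linspace(0, np.sqrt(N - 1), K) ** 2 + 1
    return t.round().astype(int)
```

For example, `even_trajectory(1000, 10)` spaces the ten selected timesteps uniformly over the full chain, while `quadratic_trajectory(1000, 10)` concentrates them near the end of the reverse process, which Table 3's caption notes yields stronger baselines for DDPM and DDIM.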
# 6.2 SAMPLE QUALITY

As for the sample quality, we consider the commonly used FID score (Heusel et al., 2017), where a lower value indicates better sample quality. As shown in Table 2, under trajectories of different $K$, our Analytic-DDIM consistently improves the sample quality of the original DDIM. This allows us to generate high-quality samples with fewer than 50 timesteps, which results in a $20 \times$ to $80 \times$ speed-up compared to the full timesteps. Indeed, in most cases, Analytic-DDIM requires at most 50 timesteps to match the performance of the baselines. Besides, Analytic-DDPM also improves the sample quality of the original DDPM in most cases. For fairness, we use the ET implementation of Nichol & Dhariwal (2021) for all results in Table 2. We also report results on CelebA 64x64 using a slightly different implementation of the ET following Song et al. (2020a) in Appendix G.7, and our Analytic-DPM is still effective. We show generated samples in Appendix G.9.

We observe that Analytic-DDPM does not always outperform the baseline under the FID metric, which is inconsistent with the likelihood results in Table 1. Such behavior is essentially rooted in the different natures of the two metrics and has been investigated in extensive prior work (Theis et al., 2015; Ho et al., 2020; Nichol & Dhariwal, 2021; Song et al., 2021; Vahdat et al., 2021; Watson et al., 2021; Kingma et al., 2021). Similarly, using more timesteps does not necessarily yield a better FID; for instance, see the Analytic-DDPM results on CIFAR10 (LS) and the DDIM results on ImageNet 64x64 in Table 2. A similar phenomenon is observed in Figure 8 of Nichol & Dhariwal (2021). Moreover, a DPM (including Analytic-DDPM) with OT does not necessarily lead to a better FID score (Watson et al., 2021) (see Appendix G.5 for a comparison of ET and OT in Analytic-DDPM).
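As the conclusion later notes, Analytic-DPM estimates its analytic variance with a Monte Carlo average under a pretrained score-based model. The only model-dependent quantity is the averaged squared score norm; a minimal sketch of that estimator is below. The names `estimate_score_norm` and `eps_model` are illustrative assumptions, not the released code, and the score identity $s_n(x_n) = -\epsilon_\theta(x_n, n)/\sqrt{\overline{\beta}_n}$ assumes the standard noise-prediction parameterization:

```python
import numpy as np

def estimate_score_norm(eps_model, x_batch, n, beta_bar_n):
    """Monte Carlo estimate of E_{q_n} ||score(x_n)||^2 / d, using the
    standard DPM identity score(x_n) = -eps(x_n, n) / sqrt(beta_bar_n).
    `eps_model` is a hypothetical noise-prediction network interface
    mapping a batch of shape (M, d) to predicted noise of the same shape."""
    d = x_batch.shape[1]
    scores = -eps_model(x_batch, n) / np.sqrt(beta_bar_n)
    return float(np.mean(np.sum(scores ** 2, axis=1)) / d)

# Toy check with a "model" that predicts constant unit noise.
unit_eps = lambda x, n: np.ones_like(x)
x = np.zeros((8, 4))  # 8 Monte Carlo samples, d = 4
print(estimate_score_norm(unit_eps, x, n=1, beta_bar_n=1.0))  # 1.0
```

Once this scalar is estimated per timestep (with a modest number of samples), the reverse variance follows from the paper's closed form without any further training, which is what makes the procedure training-free.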
Table 3: Efficiency comparison, based on the least number of timesteps $\downarrow$ required to achieve an FID around 6 (with the corresponding FID in parentheses). To get the strongest baselines, the results with ${}^{ \dagger }$ are obtained using the quadratic trajectory (Song et al., 2020a) instead of the default even trajectory.

| Method | CIFAR10 | CelebA 64x64 | LSUN Bedroom |
| --- | --- | --- | --- |
| DDPM (Ho et al., 2020)${}^{\dagger}$ | 90 (6.12) | >200 | 130 (6.06) |
| DDIM (Song et al., 2020a)${}^{\dagger}$ | 30 (5.85) | >100 | Best FID >6 |
| Improved DDPM (Nichol & Dhariwal, 2021) | 45 (5.96) | Missing model | 90 (6.02) |
| Analytic-DPM (ours) | 25 (5.81) | 55 (5.98) | 100 (6.05) |

We summarize the efficiency of different methods in Table 3, where, for a more direct comparison, we consider the least number of timesteps required to achieve an FID around 6.

# 7 RELATED WORK

DPMs and their applications. The diffusion probabilistic model (DPM) was initially introduced by Sohl-Dickstein et al. (2015), where the DPM is trained by optimizing the variational bound $L_{\mathrm{vb}}$ . Ho et al. (2020) propose the new parameterization of DPMs in Eq. (3) and learn DPMs with the reweighted variant of $L_{\mathrm{vb}}$ in Eq. (5). Song et al. (2020b) model the noise-adding forward process as a stochastic differential equation (SDE) and introduce DPMs with continuous timesteps. With these important improvements, DPMs show great potential in various applications, including speech synthesis (Chen et al., 2020; Kong et al., 2020; Popov et al., 2021; Lam et al., 2021), controllable generation (Choi et al., 2021; Sinha et al., 2021), image super-resolution (Saharia et al., 2021; Li et al., 2021), image-to-image translation (Sasaki et al., 2021), shape generation (Zhou et al., 2021) and time series forecasting (Rasul et al., 2021).

Faster DPMs. Several works attempt to find short trajectories while maintaining the DPM performance. Chen et al. (2020) find an effective trajectory of only six timesteps by grid search. However, grid search is only applicable to very short trajectories due to its exponentially growing time complexity. Watson et al. (2021) model trajectory searching as a least-cost-path problem and introduce a dynamic programming (DP) algorithm to solve it. Our work uses this DP algorithm, where the cost is defined in terms of the optimal KL divergence. In addition to these trajectory searching techniques, Luhman & Luhman (2021) compress the reverse denoising process into a single-step model, and San-Roman et al. (2021) dynamically adjust the trajectory during inference.
Both of them require extra training on top of a pretrained DPM. As for DPMs with continuous timesteps (Song et al., 2020b), Song et al. (2020b) introduce an ordinary differential equation (ODE), which improves sampling efficiency and enables exact likelihood computation. However, the likelihood computation involves a stochastic trace estimator, which requires multiple runs for an accurate estimate. Jolicoeur-Martineau et al. (2021) introduce an advanced SDE solver to simulate the reverse process more efficiently; however, the log-likelihood computation based on this solver is not specified.

Variance Learning in DPMs. In addition to the reverse variance, there are also works on learning the forward noise schedule (i.e., the forward variance). Kingma et al. (2021) propose variational diffusion models (VDMs) on continuous timesteps, which use a signal-to-noise ratio function to parameterize the forward variance and directly optimize the variational bound objective for a better log-likelihood. While we primarily apply our method to DDPMs and DDIMs, estimating the optimal reverse variance can also be applied to VDMs (see Appendix E).

# 8 CONCLUSION

We show that both the optimal reverse variance and the corresponding optimal KL divergence of a DPM have analytic forms w.r.t. its score function. Building on this result, we propose Analytic-DPM, a training-free inference framework that estimates these analytic forms using the Monte Carlo method and a pretrained score-based model. We derive bounds on the optimal variance to correct potential bias and reveal a relationship between the score function and the data covariance matrix. Empirically, Analytic-DPM improves both the efficiency and the likelihood performance of various DPMs, and efficiently generates high-quality samples.

# ACKNOWLEDGMENTS

This work was supported by NSF of China Projects (Nos.
62061136001, 61620106010, 62076145), Beijing NSF Project (No. JQ19016), Beijing Outstanding Young Scientist Program (No. BJWZYJH012019100020098), Beijing Academy of Artificial Intelligence (BAAI), Tsinghua-Huawei Joint Research Program, a grant from Tsinghua Institute for Guo Qiang, the NVIDIA NVAIL Program with GPU/DGX Acceleration, and the Major Innovation & Planning Interdisciplinary Platform for the "Double-First Class" Initiative, Renmin University of China.

# ETHICS STATEMENT

This work proposes an analytic estimate of the optimal variance in the reverse process of diffusion probabilistic models. As fundamental research in machine learning, its negative consequences are not obvious. Although in principle any technique can be misused, such misuse is unlikely at the current stage.

# REPRODUCIBILITY STATEMENT

We provide our code and links to pretrained models at https://github.com/baofff/Analytic-DPM. We provide details of these pretrained models in Appendix F.1, details of data processing, log-likelihood evaluation, sampling, and FID computation in Appendix F.2, and complete proofs of all theoretical results in Appendix A.

# REFERENCES

Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.
Nanxin Chen, Yu Zhang, Heiga Zen, Ron J Weiss, Mohammad Norouzi, and William Chan. Wavegrad: Estimating gradients for waveform generation. arXiv preprint arXiv:2009.00713, 2020.
Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. Ilvr: Conditioning method for denoising diffusion probabilistic models. arXiv preprint arXiv:2108.02938, 2021.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database.
In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009. +Prafulla Dhariwal and Alex Nichol. Diffusion models beat gans on image synthesis. arXiv preprint arXiv:2105.05233, 2021. +Yilun Du and Igor Mordatch. Implicit generation and generalization in energy-based models. arXiv preprint arXiv:1903.08689, 2019. +Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neural information processing systems, 27, 2014. +Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. +Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. arXiv preprint arXiv:2006.11239, 2020. + +Alexia Jolicoeur-Martineau, Ke Li, Rémi Piché-Taillefer, Tal Kachman, and Ioannis Mitliagkas. Gotta go fast when generating data with score-based models. arXiv preprint arXiv:2105.14080, 2021. +Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. arXiv preprint arXiv:2006.06676, 2020a. +Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8110-8119, 2020b. +Hyun-Chul Kim and Zoubin Ghahramani. Bayesian gaussian process classification with the em-ep algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12):1948-1959, 2006. +Diederik P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. arXiv preprint arXiv:1807.03039, 2018. +Diederik P Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. 
arXiv preprint arXiv:2107.00630, 2021.
Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. Diffwave: A versatile diffusion model for audio synthesis. arXiv preprint arXiv:2009.09761, 2020.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
Max WY Lam, Jun Wang, Rongjie Huang, Dan Su, and Dong Yu. Bilateral denoising diffusion models. arXiv preprint arXiv:2108.11514, 2021.
Haoying Li, Yifan Yang, Meng Chang, Huajun Feng, Zhihai Xu, Qi Li, and Yueting Chen. Srdiff: Single image super-resolution with diffusion probabilistic models. arXiv preprint arXiv:2104.14951, 2021.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pp. 3730-3738. IEEE Computer Society, 2015.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
Eric Luhman and Troy Luhman. Knowledge distillation in iterative generative models for improved sampling speed. arXiv preprint arXiv:2101.02388, 2021.
Thomas P Minka. Expectation propagation for approximate bayesian inference. arXiv preprint arXiv:1301.2294, 2013.
Thomas Peter Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, Massachusetts Institute of Technology, 2001.
Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.
Alex Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. arXiv preprint arXiv:2102.09672, 2021.
Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima Sadekova, and Mikhail Kudinov. Grad-tts: A diffusion probabilistic model for text-to-speech. arXiv preprint arXiv:2105.06337, 2021.
Kashif Rasul, Calvin Seward, Ingmar Schuster, and Roland Vollgraf.
Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting. arXiv preprint arXiv:2101.12072, 2021.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International conference on machine learning, pp. 1278-1286. PMLR, 2014.
Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. arXiv preprint arXiv:2104.07636, 2021.
Robin San-Roman, Eliya Nachmani, and Lior Wolf. Noise estimation for generative diffusion models. arXiv preprint arXiv:2104.02600, 2021.
Hiroshi Sasaki, Chris G Willcocks, and Toby P Breckon. Unit-ddpm: Unpaired image translation with denoising diffusion probabilistic models. arXiv preprint arXiv:2104.05358, 2021.
Abhishek Sinha, Jiaming Song, Chenlin Meng, and Stefano Ermon. D2c: Diffusion-denoising models for few-shot conditional generation. arXiv preprint arXiv:2106.06819, 2021.
Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pp. 2256-2265. PMLR, 2015.
Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020a.
Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. arXiv preprint arXiv:1907.05600, 2019.
Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020b.
Yang Song, Conor Durkan, Iain Murray, and Stefano Ermon. Maximum likelihood training of score-based diffusion models. arXiv preprint arXiv:2101.09258, 2021.
Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models.
arXiv preprint arXiv:1511.01844, 2015. +Arash Vahdat and Jan Kautz. Nvae: A deep hierarchical variational autoencoder. arXiv preprint arXiv:2007.03898, 2020. +Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based generative modeling in latent space. arXiv preprint arXiv:2106.05931, 2021. +Daniel Watson, Jonathan Ho, Mohammad Norouzi, and William Chan. Learning to efficiently sample from diffusion probabilistic models. arXiv preprint arXiv:2106.03802, 2021. +Yan Wu, Jeff Donahue, David Balduzzi, Karen Simonyan, and Timothy Lillicrap. Logan: Latent optimisation for generative adversarial networks. arXiv preprint arXiv:1912.00953, 2019. +Zhisheng Xiao, Karsten Kreis, Jan Kautz, and Arash Vahdat. Vaebm: A symbiosis between variational autoencoders and energy-based models. In International Conference on Learning Representations, 2020. +Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015. +Linqi Zhou, Yilun Du, and Jiajun Wu. 3d shape generation and completion through point-voxel diffusion. arXiv preprint arXiv:2104.03670, 2021. + +# A PROOFS AND DERIVATIONS + +# A.1 LEMMAS + +Lemma 1. 
(Cross-entropy to Gaussian) Suppose $q(\pmb{x})$ is a probability density function with mean $\pmb{\mu}_q$ and covariance matrix $\pmb{\Sigma}_q$ and $p(\pmb{x}) = \mathcal{N}(\pmb{x}|\pmb{\mu},\pmb{\Sigma})$ is a Gaussian distribution, then the cross-entropy between $q$ and $p$ is equal to the cross-entropy between $\mathcal{N}(\pmb{x}|\pmb{\mu}_q,\pmb{\Sigma}_q)$ and $p$ , i.e., + +$$ +\begin{array}{l} H (q, p) = H (\mathcal {N} (\boldsymbol {x} | \boldsymbol {\mu} _ {q}, \boldsymbol {\Sigma} _ {q}), p) \\ = \frac {1}{2} \log ((2 \pi) ^ {d} | \boldsymbol {\Sigma} |) + \frac {1}{2} \operatorname {t r} (\boldsymbol {\Sigma} _ {q} \boldsymbol {\Sigma} ^ {- 1}) + \frac {1}{2} (\boldsymbol {\mu} _ {q} - \boldsymbol {\mu}) ^ {\top} \boldsymbol {\Sigma} ^ {- 1} (\boldsymbol {\mu} _ {q} - \boldsymbol {\mu}). \\ \end{array} +$$ + +Proof. + +$$ +\begin{array}{l} H (q, p) = - \mathbb {E} _ {q (\boldsymbol {x})} \log p (\boldsymbol {x}) = - \mathbb {E} _ {q (\boldsymbol {x})} \log \frac {1}{\sqrt {(2 \pi) ^ {d} | \boldsymbol {\Sigma} |}} \exp \left(- \frac {(\boldsymbol {x} - \boldsymbol {\mu}) ^ {\top} \boldsymbol {\Sigma} ^ {- 1} (\boldsymbol {x} - \boldsymbol {\mu})}{2}\right) \\ = \frac {1}{2} \log ((2 \pi) ^ {d} | \boldsymbol {\Sigma} |) + \frac {1}{2} \mathbb {E} _ {q (\boldsymbol {x})} (\boldsymbol {x} - \boldsymbol {\mu}) ^ {\top} \boldsymbol {\Sigma} ^ {- 1} (\boldsymbol {x} - \boldsymbol {\mu}) \\ = \frac {1}{2} \log ((2 \pi) ^ {d} | \boldsymbol {\Sigma} |) + \frac {1}{2} \mathbb {E} _ {q (\boldsymbol {x})} \operatorname {t r} ((\boldsymbol {x} - \boldsymbol {\mu}) (\boldsymbol {x} - \boldsymbol {\mu}) ^ {\top} \boldsymbol {\Sigma} ^ {- 1}) \\ = \frac {1}{2} \log ((2 \pi) ^ {d} | \boldsymbol {\Sigma} |) + \frac {1}{2} \operatorname {t r} \left(\mathbb {E} _ {q (\boldsymbol {x})} \left[ (\boldsymbol {x} - \boldsymbol {\mu}) (\boldsymbol {x} - \boldsymbol {\mu}) ^ {\top} \right] \boldsymbol {\Sigma} ^ {- 1}\right) \\ = \frac {1}{2} \log ((2 \pi) ^ {d} | \boldsymbol 
{\Sigma} |) + \frac {1}{2} \operatorname {t r} (\mathbb {E} _ {q (\boldsymbol {x})} \left[ (\boldsymbol {x} - \boldsymbol {\mu} _ {q}) (\boldsymbol {x} - \boldsymbol {\mu} _ {q}) ^ {\top} + (\boldsymbol {\mu} _ {q} - \boldsymbol {\mu}) (\boldsymbol {\mu} _ {q} - \boldsymbol {\mu}) ^ {\top} \right] \boldsymbol {\Sigma} ^ {- 1}) \\ = \frac {1}{2} \log ((2 \pi) ^ {d} | \boldsymbol {\Sigma} |) + \frac {1}{2} \operatorname {t r} \left(\left[ \boldsymbol {\Sigma} _ {q} + (\boldsymbol {\mu} _ {q} - \boldsymbol {\mu}) (\boldsymbol {\mu} _ {q} - \boldsymbol {\mu}) ^ {\top} \right] \boldsymbol {\Sigma} ^ {- 1}\right) \\ = \frac {1}{2} \log ((2 \pi) ^ {d} | \boldsymbol {\Sigma} |) + \frac {1}{2} \operatorname {t r} (\boldsymbol {\Sigma} _ {q} \boldsymbol {\Sigma} ^ {- 1}) + \frac {1}{2} \operatorname {t r} ((\boldsymbol {\mu} _ {q} - \boldsymbol {\mu}) (\boldsymbol {\mu} _ {q} - \boldsymbol {\mu}) ^ {\top} \boldsymbol {\Sigma} ^ {- 1}) \\ = \frac {1}{2} \log ((2 \pi) ^ {d} | \boldsymbol {\Sigma} |) + \frac {1}{2} \operatorname {t r} (\boldsymbol {\Sigma} _ {q} \boldsymbol {\Sigma} ^ {- 1}) + \frac {1}{2} (\boldsymbol {\mu} _ {q} - \boldsymbol {\mu}) ^ {\top} \boldsymbol {\Sigma} ^ {- 1} (\boldsymbol {\mu} _ {q} - \boldsymbol {\mu}) \\ = H (\mathcal {N} (\boldsymbol {x} | \boldsymbol {\mu} _ {q}, \boldsymbol {\Sigma} _ {q}), p). \\ \end{array} +$$ + +![](images/c8992ade917e2bd5cd87e4339fe52f24a5e78a893005b60f41eb92ba01ffda36.jpg) + +Lemma 2. 
(KL to Gaussian) Suppose $q(\pmb{x})$ is a probability density function with mean $\pmb{\mu}_q$ and covariance matrix $\pmb{\Sigma}_q$ and $p(\pmb{x}) = \mathcal{N}(\pmb{x}|\pmb{\mu},\pmb{\Sigma})$ is a Gaussian distribution, then

$$
D _ {\mathrm {K L}} (q | | p) = D _ {\mathrm {K L}} \left(\mathcal {N} \left(\boldsymbol {x} \mid \boldsymbol {\mu} _ {q}, \boldsymbol {\Sigma} _ {q}\right) | | p\right) + H \left(\mathcal {N} \left(\boldsymbol {x} \mid \boldsymbol {\mu} _ {q}, \boldsymbol {\Sigma} _ {q}\right)\right) - H (q),
$$

where $H(\cdot)$ denotes the entropy of a distribution.

Proof. According to Lemma 1, we have $H(q,p) = H(\mathcal{N}(\pmb{x}|\pmb{\mu}_q,\pmb{\Sigma}_q),p)$ . Therefore,

$$
\begin{array}{l} D _ {\mathrm {K L}} (q | | p) = H (q, p) - H (q) = H \left(\mathcal {N} (\boldsymbol {x} \mid \boldsymbol {\mu} _ {q}, \boldsymbol {\Sigma} _ {q}), p\right) - H (q) \\ = H \left(\mathcal {N} \left(\boldsymbol {x} \mid \boldsymbol {\mu} _ {q}, \boldsymbol {\Sigma} _ {q}\right), p\right) - H \left(\mathcal {N} \left(\boldsymbol {x} \mid \boldsymbol {\mu} _ {q}, \boldsymbol {\Sigma} _ {q}\right)\right) + H \left(\mathcal {N} \left(\boldsymbol {x} \mid \boldsymbol {\mu} _ {q}, \boldsymbol {\Sigma} _ {q}\right)\right) - H (q) \\ = D _ {\mathrm {K L}} \left(\mathcal {N} \left(\boldsymbol {x} \mid \boldsymbol {\mu} _ {q}, \boldsymbol {\Sigma} _ {q}\right) | | p\right) + H \left(\mathcal {N} \left(\boldsymbol {x} \mid \boldsymbol {\mu} _ {q}, \boldsymbol {\Sigma} _ {q}\right)\right) - H (q). \\ \end{array}
$$

![](images/2391e28eb5d61cd475f35f5970d27fe7003d419e1b638d8ee3fc28f45d2098a1.jpg)

Lemma 3. (Equivalence between the forward and reverse Markov property) Suppose $q(\pmb{x}_{0:N}) = q(\pmb{x}_0)\prod_{n=1}^{N}q(\pmb{x}_n|\pmb{x}_{n-1})$ is a Markov chain, then $q$ is also a Markov chain in the reverse direction, i.e.,

$$
q \left(\boldsymbol {x} _ {0: N}\right) = q \left(\boldsymbol {x} _ {N}\right) \prod_ {n = 1} ^ {N} q \left(\boldsymbol {x} _ {n - 1} \mid \boldsymbol {x} _ {n}\right).
$$

Proof.

$$
\begin{array}{l} q \left(\boldsymbol {x} _ {n - 1} \mid \boldsymbol {x} _ {n}, \dots , \boldsymbol {x} _ {N}\right) = \frac {q \left(\boldsymbol {x} _ {n - 1} , \boldsymbol {x} _ {n} , \cdots , \boldsymbol {x} _ {N}\right)}{q \left(\boldsymbol {x} _ {n} , \cdots , \boldsymbol {x} _ {N}\right)} \\ = \frac {q (\boldsymbol {x} _ {n - 1} , \boldsymbol {x} _ {n}) \prod_ {i = n + 1} ^ {N} q (\boldsymbol {x} _ {i} | \boldsymbol {x} _ {i - 1})}{q (\boldsymbol {x} _ {n}) \prod_ {i = n + 1} ^ {N} q (\boldsymbol {x} _ {i} | \boldsymbol {x} _ {i - 1})} = q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}). \\ \end{array}
$$

Therefore, $q(\pmb{x}_{0:N}) = q(\pmb{x}_N) \prod_{n=1}^{N} q(\pmb{x}_{n-1} | \pmb{x}_n)$ .

![](images/a1e11debf500578e119e2b9a064bdb0384fb5e312590837c3a0b60f7b56016da.jpg)

Lemma 4. (Entropy of a Markov chain) Suppose $q(\pmb{x}_{0:N})$ is a Markov chain, then

$$
H (q (\boldsymbol {x} _ {0: N})) = H (q (\boldsymbol {x} _ {N})) + \sum_ {n = 1} ^ {N} \mathbb {E} _ {q} H (q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})) = H (q (\boldsymbol {x} _ {0})) + \sum_ {n = 1} ^ {N} \mathbb {E} _ {q} H (q (\boldsymbol {x} _ {n} | \boldsymbol {x} _ {n - 1})).
$$

Proof. According to Lemma 3, we have

$$
\begin{array}{l} H (q (\boldsymbol {x} _ {0: N})) = - \mathbb {E} _ {q} \log q (\boldsymbol {x} _ {N}) \prod_ {n = 1} ^ {N} q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}) = - \mathbb {E} _ {q} \log q (\boldsymbol {x} _ {N}) - \sum_ {n = 1} ^ {N} \mathbb {E} _ {q} \log q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}) \\ = H (q (\boldsymbol {x} _ {N})) + \sum_ {n = 1} ^ {N} \mathbb {E} _ {q} H (q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})).
\\ \end{array} +$$ + +Similarly, we also have $H(q(\pmb{x}_{0:N})) = H(q(\pmb{x}_0)) + \sum_{n=1}^{N} \mathbb{E}_q H(q(\pmb{x}_n|\pmb{x}_{n-1}))$ . + +![](images/c8e69c822bc5350853ae843519a3708ac74c79bc72c2235eac41d24983345f78.jpg) + +Lemma 5. (Entropy of a DDPM forward process) Suppose $q(\pmb{x}_{0:N})$ is a Markov chain and $q(\pmb{x}_n|\pmb{x}_{n-1}) = \mathcal{N}(\pmb{x}_n|\sqrt{\alpha_n}\pmb{x}_{n-1}, \beta_n\pmb{I})$ , then + +$$ +H \left(q \left(\boldsymbol {x} _ {0: N}\right)\right) = H \left(q \left(\boldsymbol {x} _ {0}\right)\right) + \frac {d}{2} \sum_ {n = 1} ^ {N} \log \left(2 \pi e \beta_ {n}\right). +$$ + +Proof. According to Lemma 4, we have + +$$ +H \left(q \left(\boldsymbol {x} _ {0: N}\right)\right) = H \left(q \left(\boldsymbol {x} _ {0}\right)\right) + \sum_ {n = 1} ^ {N} \mathbb {E} _ {q} H \left(q \left(\boldsymbol {x} _ {n} \mid \boldsymbol {x} _ {n - 1}\right)\right) = H \left(q \left(\boldsymbol {x} _ {0}\right)\right) + \sum_ {n = 1} ^ {N} \frac {d}{2} \log \left(2 \pi e \beta_ {n}\right). +$$ + +![](images/d65e8d812dc4d95ba95180ee42794244704fe2a4de6a9f14721abc468c1939f8.jpg) + +Lemma 6. (Entropy of a conditional Markov chain) Suppose $q(\pmb{x}_{1:N}|\pmb{x}_0)$ is Markov, then + +$$ +H (q (\boldsymbol {x} _ {0: N})) = H (q (\boldsymbol {x} _ {0})) + \mathbb {E} _ {q} H (q (\boldsymbol {x} _ {N} | \boldsymbol {x} _ {0})) + \sum_ {n = 2} ^ {N} \mathbb {E} _ {q} H (q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}, \boldsymbol {x} _ {0})). +$$ + +Proof. According to Lemma 4, we have + +$$ +\begin{array}{l} H (q (\boldsymbol {x} _ {0: N})) = H (q (\boldsymbol {x} _ {0})) + \mathbb {E} _ {q} H (q (\boldsymbol {x} _ {1: N} | \boldsymbol {x} _ {0})) \\ = H (q (\boldsymbol {x} _ {0})) + \mathbb {E} _ {q} H (q (\boldsymbol {x} _ {N} | \boldsymbol {x} _ {0})) + \sum_ {n = 2} ^ {N} \mathbb {E} _ {q} H (q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}, \boldsymbol {x} _ {0})). 
\\ \end{array} +$$ + +![](images/52e05d2870d52c175216745192b9dfc75befc7f2e532362bf803d6ce35de926b.jpg) + +Lemma 7. (Entropy of a generalized DDPM forward process) Suppose $q(\pmb{x}_{1:N}|\pmb{x}_0)$ is Markov, $q(\pmb{x}_N|\pmb{x}_0)$ is Gaussian with covariance $\overline{\beta}_N\pmb{I}$ and $q(\pmb{x}_{n-1}|\pmb{x}_n,\pmb{x}_0)$ is Gaussian with covariance $\lambda_n^2\pmb{I}$ , then + +$$ +H (q (\boldsymbol {x} _ {0: N})) = H (q (\boldsymbol {x} _ {0})) + \frac {d}{2} \log (2 \pi e \overline {{\beta}} _ {N}) + \frac {d}{2} \sum_ {n = 2} ^ {N} \log (2 \pi e \lambda_ {n} ^ {2}). +$$ + +Proof. Directly derived from Lemma 6. + +Lemma 8. (KL to a Markov chain) Suppose $q(\pmb{x}_{0:N})$ is a probability distribution and $p(\pmb{x}_{0:N}) = p(\pmb{x}_N)\prod_{n=1}^{N}p(\pmb{x}_{n-1}|\pmb{x}_n)$ is a Markov chain, then we have + +$$ +\mathbb {E} _ {q} D _ {\mathrm {K L}} (q (\boldsymbol {x} _ {0: N - 1} | \boldsymbol {x} _ {N}) | | p (\boldsymbol {x} _ {0: N - 1} | \boldsymbol {x} _ {N})) = \sum_ {n = 1} ^ {N} \mathbb {E} _ {q} D _ {\mathrm {K L}} (q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}) | | p (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})) + c, +$$ + +where $c = \sum_{n=1}^{N} \mathbb{E}_q H(q(\pmb{x}_{n-1}|\pmb{x}_n)) - \mathbb{E}_q H(q(\pmb{x}_{0:N-1}|\pmb{x}_N))$ is only related to $q$ . Particularly, if $q(\pmb{x}_{0:N})$ is also a Markov chain, then $c = 0$ . + +Proof. 
+ +$$ +\begin{array}{l} \mathbb {E} _ {q} D _ {\mathrm {K L}} \left(q \left(\boldsymbol {x} _ {0: N - 1} \mid \boldsymbol {x} _ {N}\right) \mid p \left(\boldsymbol {x} _ {0: N - 1} \mid \boldsymbol {x} _ {N}\right)\right) = - \mathbb {E} _ {q} \log p \left(\boldsymbol {x} _ {0: N - 1} \mid \boldsymbol {x} _ {N}\right) - \mathbb {E} _ {q} H \left(q \left(\boldsymbol {x} _ {0: N - 1} \mid \boldsymbol {x} _ {N}\right)\right) \\ = - \sum_ {n = 1} ^ {N} \mathbb {E} _ {q} \log p (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}) - \mathbb {E} _ {q} H (q (\boldsymbol {x} _ {0: N - 1} | \boldsymbol {x} _ {N})) \\ = \sum_ {n = 1} ^ {N} \mathbb {E} _ {q} D _ {\mathrm {K L}} (q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}) | | p (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})) + \sum_ {n = 1} ^ {N} \mathbb {E} _ {q} H (q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})) - \mathbb {E} _ {q} H (q (\boldsymbol {x} _ {0: N - 1} | \boldsymbol {x} _ {N})). \\ \end{array} +$$ + +Let $c = \sum_{n=1}^{N} \mathbb{E}_q H(q(\pmb{x}_{n-1}|\pmb{x}_n)) - \mathbb{E}_q H(q(\pmb{x}_{0:N-1}|\pmb{x}_N))$ , then + +$$ +\mathbb {E} _ {q} D _ {\mathrm {K L}} \left(q \left(\boldsymbol {x} _ {0: N - 1} \mid \boldsymbol {x} _ {N}\right) | | p \left(\boldsymbol {x} _ {0: N - 1} \mid \boldsymbol {x} _ {N}\right)\right) = \sum_ {n = 1} ^ {N} \mathbb {E} _ {q} D _ {\mathrm {K L}} \left(q \left(\boldsymbol {x} _ {n - 1} \mid \boldsymbol {x} _ {n}\right) | | p \left(\boldsymbol {x} _ {n - 1} \mid \boldsymbol {x} _ {n}\right)\right) + c. +$$ + +If $q(\pmb{x}_{0:N})$ is also a Markov chain, according to Lemma 4, we have $c = 0$ . + +Lemma 9. 
(The optimal Markov reverse process with Gaussian transitions is equivalent to moment matching) Suppose $q(\pmb{x}_{0:N})$ is a probability density function and $p(\pmb{x}_{0:N}) = \prod_{n=1}^{N} p(\pmb{x}_{n-1} | \pmb{x}_n) p(\pmb{x}_N)$ is a Gaussian Markov chain with $p(\pmb{x}_{n-1} | \pmb{x}_n) = \mathcal{N}(\pmb{x}_{n-1} | \pmb{\mu}_n(\pmb{x}_n), \sigma_n^2 \pmb{I})$ , then the joint KL optimization

$$
\min_{\{\boldsymbol{\mu}_{n},\sigma_{n}^{2}\}_{n = 1}^{N}}D_{\mathrm{KL}}(q(\boldsymbol{x}_{0:N})||p(\boldsymbol{x}_{0:N}))
$$

has an optimal solution

$$
\boldsymbol {\mu} _ {n} ^ {*} (\boldsymbol {x} _ {n}) = \mathbb {E} _ {q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {n - 1} ], \quad \sigma_ {n} ^ {* 2} = \mathbb {E} _ {q _ {n} (\boldsymbol {x} _ {n})} \frac {\operatorname {t r} (\operatorname {C o v} _ {q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {n - 1} ])}{d},
$$

which matches the first two moments of $q(\pmb{x}_{n-1}|\pmb{x}_n)$ . The corresponding optimal KL divergence is

$$
D _ {\mathrm {K L}} (q (\boldsymbol {x} _ {0: N}) | | p ^ {*} (\boldsymbol {x} _ {0: N})) = H (q (\boldsymbol {x} _ {N}), p (\boldsymbol {x} _ {N})) + \frac {d}{2} \sum_ {n = 1} ^ {N} \log (2 \pi e \sigma_ {n} ^ {* 2}) - H (q (\boldsymbol {x} _ {0: N})).
$$

Remark. Lemma 9 does not assume the form of $q(\pmb{x}_{0:N})$ , so it can be applied to more general Gaussian models, such as multi-layer VAEs with Gaussian decoders (Rezende et al., 2014; Burda et al., 2015). In this case, $q(\pmb{x}_{1:N}|\pmb{x}_0)$ corresponds to the hierarchical encoder of a multi-layer VAE.

Proof.
According to Lemma 8, we have

$$
D _ {\mathrm {K L}} (q (\boldsymbol {x} _ {0: N}) | | p (\boldsymbol {x} _ {0: N})) = D _ {\mathrm {K L}} (q (\boldsymbol {x} _ {N}) | | p (\boldsymbol {x} _ {N})) + \sum_ {n = 1} ^ {N} \mathbb {E} _ {q} D _ {\mathrm {K L}} (q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}) | | p (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})) + c,
$$

where $c = \sum_{n=1}^{N} \mathbb{E}_q H(q(\pmb{x}_{n-1}|\pmb{x}_n)) - \mathbb{E}_q H(q(\pmb{x}_{0:N-1}|\pmb{x}_N))$ .

Since $\mathbb{E}_qD_{\mathrm{KL}}(q(\pmb{x}_{n - 1}|\pmb{x}_n)||p(\pmb{x}_{n - 1}|\pmb{x}_n))$ is only related to $\pmb{\mu}_n(\cdot)$ and $\sigma_n^2$ , the joint KL optimization is decomposed into $N$ independent optimization sub-problems:

$$
\min _ {\boldsymbol {\mu} _ {n}, \sigma_ {n} ^ {2}} \mathbb {E} _ {q} D _ {\mathrm {K L}} (q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}) | | p (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})), \quad 1 \leq n \leq N.
$$

According to Lemma 2, we have

$$
\begin{array}{l} \mathbb {E} _ {q} D _ {\mathrm {K L}} \left(q \left(\boldsymbol {x} _ {n - 1} \mid \boldsymbol {x} _ {n}\right) | | p \left(\boldsymbol {x} _ {n - 1} \mid \boldsymbol {x} _ {n}\right)\right) \\ = \mathbb {E} _ {q} D _ {\mathrm {K L}} \left(\mathcal {N} \left(\boldsymbol {x} _ {n - 1} \mid \mathbb {E} _ {q \left(\boldsymbol {x} _ {n - 1} \mid \boldsymbol {x} _ {n}\right)} [ \boldsymbol {x} _ {n - 1} ], \operatorname {C o v} _ {q \left(\boldsymbol {x} _ {n - 1} \mid \boldsymbol {x} _ {n}\right)} [ \boldsymbol {x} _ {n - 1} ]\right) | | p \left(\boldsymbol {x} _ {n - 1} \mid \boldsymbol {x} _ {n}\right)\right) \\ + \mathbb {E} _ {q} H (\mathcal {N} (\boldsymbol {x} _ {n - 1} | \mathbb {E} _ {q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {n - 1} ], \operatorname {C o v} _ {q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {n - 1} ])) - \mathbb {E} _ {q} H (q (\boldsymbol {x} _ {n - 1} |
\boldsymbol {x} _ {n})) \\ = \mathcal {F} \left(\sigma_ {n} ^ {2}\right) + \mathcal {G} \left(\sigma_ {n} ^ {2}, \boldsymbol {\mu} _ {n}\right) + c ^ {\prime} \\ \end{array} +$$ + +where + +$$ +\mathcal {F} \left(\sigma_ {n} ^ {2}\right) = \frac {1}{2} \left(\sigma_ {n} ^ {- 2} \mathbb {E} _ {q} \operatorname {t r} \left(\operatorname {C o v} _ {q \left(\boldsymbol {x} _ {n - 1} \mid \boldsymbol {x} _ {n}\right)} \left[ \boldsymbol {x} _ {n - 1} \right]\right) + d \log \sigma_ {n} ^ {2}\right), +$$ + +$$ +\mathcal {G} (\sigma_ {n} ^ {2}, \pmb {\mu} _ {n}) = \frac {1}{2} \sigma_ {n} ^ {- 2} \mathbb {E} _ {q} | | \mathbb {E} _ {q (\pmb {x} _ {n - 1} | \pmb {x} _ {n})} [ \pmb {x} _ {n - 1} ] - \pmb {\mu} _ {n} (\pmb {x} _ {n}) | | ^ {2}, +$$ + +and $c' = \frac{d}{2} \log(2\pi) - \mathbb{E}_q H(q(\pmb{x}_{n-1}|\pmb{x}_n))$ . The optimal $\pmb{\mu}_n^*(\pmb{x}_n)$ is achieved when + +$$ +\left| \left| \mathbb {E} _ {q \left(\boldsymbol {x} _ {n - 1} \mid \boldsymbol {x} _ {n}\right)} \left[ \boldsymbol {x} _ {n - 1} \right] - \boldsymbol {\mu} _ {n} (\boldsymbol {x} _ {n}) \right| \right| ^ {2} = 0. +$$ + +Thereby, $\pmb{\mu}_n^*(\pmb{x}_n) = \mathbb{E}_{q(\pmb{x}_{n-1}|\pmb{x}_n)}[\pmb{x}_{n-1}]$ . In this case, $\mathcal{G}(\sigma_n^2, \pmb{\mu}_n^*) = 0$ and we only need to consider $\mathcal{F}(\sigma_n^2)$ for the optimal $\sigma_n^{*2}$ . By calculating the gradient of $\mathcal{F}$ , we know that $\mathcal{F}$ gets its minimum at + +$$ +\sigma_ {n} ^ {* 2} = \mathbb {E} _ {q} \frac {\operatorname {t r} (\operatorname {C o v} _ {q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {n - 1} ])}{d}. 
+$$ + +In the optimal case, $\mathcal{F}(\sigma_n^{*2}) = \frac{d}{2} (1 + \log \sigma_n^{*2})$ and + +$$ +E _ {q} D _ {\mathrm {K L}} (q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}) | | p ^ {*} (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})) = \frac {d}{2} \log (2 \pi e \sigma_ {n} ^ {* 2}) - \mathbb {E} _ {q} H (q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})). +$$ + +As a result, + +$$ +\begin{array}{l} D _ {\mathrm {K L}} \left(q \left(\boldsymbol {x} _ {0: N}\right) | | p ^ {*} \left(\boldsymbol {x} _ {0: N}\right)\right) \\ = D _ {\mathrm {K L}} \left(q \left(\boldsymbol {x} _ {N}\right) | | p \left(\boldsymbol {x} _ {N}\right)\right) + \sum_ {n = 1} ^ {N} \frac {d}{2} \log \left(2 \pi e \sigma_ {n} ^ {* 2}\right) - \sum_ {n = 1} ^ {N} \mathbb {E} _ {q} H \left(q \left(\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}\right)\right) \\ + \sum_ {n = 1} ^ {N} \mathbb {E} _ {q} H (q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})) - (H (q (\boldsymbol {x} _ {0: N})) - H (q (\boldsymbol {x} _ {N}))) \\ = H (q (\boldsymbol {x} _ {N}), p (\boldsymbol {x} _ {N})) + \sum_ {n = 1} ^ {N} \frac {d}{2} \log (2 \pi e \sigma_ {n} ^ {* 2}) - H (q (\boldsymbol {x} _ {0: N})). \\ \end{array} +$$ + +![](images/7702b101e90e5ae4bee9b8193a26c5633f6517c545cbe71e4ed14c810ef51e22.jpg) + +Lemma 10. (Marginal score function) Suppose $q(\pmb{v}, \pmb{w})$ is a probability distribution, then + +$$ +\nabla_ {\boldsymbol {w}} \log q (\boldsymbol {w}) = \mathbb {E} _ {q (\boldsymbol {v} | \boldsymbol {w})} \nabla_ {\boldsymbol {w}} \log q (\boldsymbol {w} | \boldsymbol {v}) +$$ + +Proof. 
According to $\mathbb{E}_{q(\boldsymbol{v}|\boldsymbol{w})}\nabla_{\boldsymbol{w}}\log q(\boldsymbol{v}|\boldsymbol{w}) = \int \nabla_{\boldsymbol{w}}q(\boldsymbol{v}|\boldsymbol{w})\mathrm{d}\boldsymbol{v} = \nabla_{\boldsymbol{w}}\int q(\boldsymbol{v}|\boldsymbol{w})\mathrm{d}\boldsymbol{v} = \mathbf{0}$ , we have + +$$ +\begin{array}{l} \nabla_ {\boldsymbol {w}} \log q (\boldsymbol {w}) = \nabla_ {\boldsymbol {w}} \log q (\boldsymbol {w}) + \mathbb {E} _ {q (\boldsymbol {v} | \boldsymbol {w})} \nabla_ {\boldsymbol {w}} \log q (\boldsymbol {v} | \boldsymbol {w}) \\ = \mathbb {E} _ {q (\boldsymbol {v} | \boldsymbol {w})} \nabla_ {\boldsymbol {w}} \log q (\boldsymbol {v}, \boldsymbol {w}) = \mathbb {E} _ {q (\boldsymbol {v} | \boldsymbol {w})} \nabla_ {\boldsymbol {w}} \log q (\boldsymbol {w} | \boldsymbol {v}). \\ \end{array} +$$ + +![](images/11f960b87f266968438356d881a393f18187cb4011f305650d686ea106ed46bf.jpg) + +Lemma 11. (Score representation of conditional expectation and covariance) Suppose $q(\pmb{v}, \pmb{w}) = q(\pmb{v})q(\pmb{w}|\pmb{v})$ , where $q(\pmb{w}|\pmb{v}) = \mathcal{N}(\pmb{w}|\sqrt{\alpha}\pmb{v}, \beta\pmb{I})$ , then + +$$ +\mathbb {E} _ {q (\boldsymbol {v} | \boldsymbol {w})} [ \boldsymbol {v} ] = \frac {1}{\sqrt {\alpha}} (\boldsymbol {w} + \beta \nabla_ {\boldsymbol {w}} \log q (\boldsymbol {w})), +$$ + +$$ +\mathbb {E} _ {q (\pmb {w})} \mathrm {C o v} _ {q (\pmb {v} | \pmb {w})} [ \pmb {v} ] = \frac {\beta}{\alpha} \left(\pmb {I} - \beta \mathbb {E} _ {q (\pmb {w})} \left[ \nabla_ {\pmb {w}} \log q (\pmb {w}) \nabla_ {\pmb {w}} \log q (\pmb {w}) ^ {\top} \right]\right), +$$ + +$$ +\mathbb {E} _ {q (\pmb {w})} \frac {\mathrm {t r} (\mathrm {C o v} _ {q (\pmb {v} | \pmb {w})} [ \pmb {v} ])}{d} = \frac {\beta}{\alpha} \left(1 - \beta \mathbb {E} _ {q (\pmb {w})} \frac {| | \nabla_ {\pmb {w}} \log q (\pmb {w}) | | ^ {2}}{d}\right). +$$ + +Proof. 
According to Lemma 10, we have + +$$ +\nabla_ {\pmb {w}} \log q (\pmb {w}) = \mathbb {E} _ {q (\pmb {v} | \pmb {w})} \nabla_ {\pmb {w}} \log q (\pmb {w} | \pmb {v}) = - \mathbb {E} _ {q (\pmb {v} | \pmb {w})} \frac {\pmb {w} - \sqrt {\alpha} \pmb {v}}{\beta}. +$$ + +Thereby, $\mathbb{E}_{q(\pmb{v}|\pmb{w})}[\pmb{v}] = \frac{1}{\sqrt{\alpha}} (\pmb{w} + \beta \nabla_{\pmb{w}}\log q(\pmb{w}))$ . Furthermore, we have + +$$ +\begin{array}{l} \mathbb {E} _ {q (\boldsymbol {w})} \operatorname {C o v} _ {q (\boldsymbol {v} | \boldsymbol {w})} [ \boldsymbol {v} ] = \frac {\beta^ {2}}{\alpha} \mathbb {E} _ {q (\boldsymbol {w})} \operatorname {C o v} _ {q (\boldsymbol {v} | \boldsymbol {w})} [ \frac {\boldsymbol {w} - \sqrt {\alpha} \boldsymbol {v}}{\beta} ] \\ = \frac {\beta^ {2}}{\alpha} \mathbb {E} _ {q (\boldsymbol {w})} \left(\mathbb {E} _ {q (\boldsymbol {v} | \boldsymbol {w})} \left(\frac {\boldsymbol {w} - \sqrt {\alpha} \boldsymbol {v}}{\beta}\right) \left(\frac {\boldsymbol {w} - \sqrt {\alpha} \boldsymbol {v}}{\beta}\right) ^ {\top} - \mathbb {E} _ {q (\boldsymbol {v} | \boldsymbol {w})} \left[ \frac {\boldsymbol {w} - \sqrt {\alpha} \boldsymbol {v}}{\beta} \right] \mathbb {E} _ {q (\boldsymbol {v} | \boldsymbol {w})} \left[ \frac {\boldsymbol {w} - \sqrt {\alpha} \boldsymbol {v}}{\beta} \right] ^ {\top}\right) \\ = \frac {\beta^ {2}}{\alpha} \left(\frac {1}{\beta^ {2}} \mathbb {E} _ {q (\boldsymbol {v})} \mathbb {E} _ {q (\boldsymbol {w} | \boldsymbol {v})} (\boldsymbol {w} - \sqrt {\alpha} \boldsymbol {v}) (\boldsymbol {w} - \sqrt {\alpha} \boldsymbol {v}) ^ {\top} - \mathbb {E} _ {q (\boldsymbol {w})} \nabla_ {\boldsymbol {w}} \log q (\boldsymbol {w}) \nabla_ {\boldsymbol {w}} \log q (\boldsymbol {w}) ^ {\top}\right) \\ = \frac {\beta^ {2}}{\alpha} \left(\frac {1}{\beta^ {2}} \mathbb {E} _ {q (\boldsymbol {v})} \operatorname {C o v} _ {q (\boldsymbol {w} | \boldsymbol {v})} \boldsymbol {w} - \mathbb {E} _ {q (\boldsymbol {w})} \nabla_ {\boldsymbol {w}} \log q 
(\boldsymbol {w}) \nabla_ {\boldsymbol {w}} \log q (\boldsymbol {w}) ^ {\top}\right) \\ = \frac {\beta^ {2}}{\alpha} \left(\frac {1}{\beta^ {2}} \mathbb {E} _ {q (\boldsymbol {v})} \beta \boldsymbol {I} - \mathbb {E} _ {q (\boldsymbol {w})} \nabla_ {\boldsymbol {w}} \log q (\boldsymbol {w}) \nabla_ {\boldsymbol {w}} \log q (\boldsymbol {w}) ^ {\top}\right) \\ = \frac {\beta^ {2}}{\alpha} \left(\frac {1}{\beta} \pmb {I} - \mathbb {E} _ {q (\pmb {w})} \nabla_ {\pmb {w}} \log q (\pmb {w}) \nabla_ {\pmb {w}} \log q (\pmb {w}) ^ {\top}\right) = \frac {\beta}{\alpha} (\pmb {I} - \beta \mathbb {E} _ {q (\pmb {w})} \nabla_ {\pmb {w}} \log q (\pmb {w}) \nabla_ {\pmb {w}} \log q (\pmb {w}) ^ {\top}). \\ \end{array} +$$ + +Taking the trace, we have + +$$ +\mathbb {E} _ {q (\pmb {w})} \frac {\mathrm {t r} (\mathrm {C o v} _ {q (\pmb {v} | \pmb {w})} [ \pmb {v} ])}{d} = \frac {\beta}{\alpha} (1 - \beta \mathbb {E} _ {q (\pmb {w})} \frac {| | \nabla_ {\pmb {w}} \log q (\pmb {w}) | | ^ {2}}{d}). +$$ + +Lemma 12. (Bounded covariance of a bounded distribution) Suppose $q(\pmb{x})$ is a bounded distribution in $[a, b]^d$ , then $\frac{\mathrm{tr}(\mathrm{Cov}_{q(\pmb{x})}[\pmb{x}])}{d} \leq \left(\frac{b - a}{2}\right)^2$ . + +Proof. + +$$ +\begin{array}{l} \frac {\mathrm {t r} (\operatorname {C o v} _ {q (\boldsymbol {x})} [ \boldsymbol {x} ])}{d} = \frac {\mathrm {t r} (\operatorname {C o v} _ {q (\boldsymbol {x})} [ \boldsymbol {x} - \frac {a + b}{2} ])}{d} = \frac {\mathbb {E} _ {q (\boldsymbol {x})} | | \boldsymbol {x} - \frac {a + b}{2} | | ^ {2} - | | \mathbb {E} \boldsymbol {x} - \frac {a + b}{2} | | ^ {2}}{d} \\ \leq \frac {\mathbb {E} _ {q (\pmb {x})} | | \pmb {x} - \frac {a + b}{2} | | ^ {2}}{d} \leq (\frac {b - a}{2}) ^ {2}. \\ \end{array} +$$ + +![](images/e8fa38ccf8a61f0c466298f9301a83d54637d6811b021c96c222d90dd3a10021.jpg) + +Lemma 13. 
(Convert the moments of $q(\pmb{x}_{n-1}|\pmb{x}_n)$ to moments of $q(\pmb{x}_0|\pmb{x}_n)$ ) The optimal solution $\pmb{\mu}_n^*(\pmb{x}_n)$ and $\sigma_n^{*2}$ to Eq. (4) can be represented by the first two moments of $q(\pmb{x}_0|\pmb{x}_n)$ + +$$ +\boldsymbol {\mu} _ {n} ^ {*} (\boldsymbol {x} _ {n}) = \tilde {\boldsymbol {\mu}} _ {n} (\boldsymbol {x} _ {n}, \mathbb {E} _ {q (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {n})} \boldsymbol {x} _ {0}) +$$ + +$$ +\sigma_ {n} ^ {* 2} = \lambda_ {n} ^ {2} + \left(\sqrt {\overline {{\alpha}} _ {n - 1}} - \sqrt {\overline {{\beta}} _ {n - 1} - \lambda_ {n} ^ {2}} \cdot \sqrt {\frac {\overline {{\alpha}} _ {n}}{\overline {{\beta}} _ {n}}}\right) ^ {2} \mathbb {E} _ {q (\pmb {x} _ {n})} \frac {\mathrm {t r} (\mathrm {C o v} _ {q (\pmb {x} _ {0} | \pmb {x} _ {n})} [ \pmb {x} _ {0} ])}{d} +$$ + +where $q_{n}(\mathbf{x}_{n})$ is the marginal distribution of the forward process at timestep $n$ and $d$ is the dimension of the data. + +Proof. According to Lemma 9, the optimal $\pmb{\mu}_n^*$ and $\sigma_n^{*2}$ under KL minimization is + +$$ +\boldsymbol {\mu} _ {n} ^ {*} (\boldsymbol {x} _ {n}) = \mathbb {E} _ {q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {n - 1} ], \quad \sigma_ {n} ^ {* 2} = \mathbb {E} _ {q _ {n} (\boldsymbol {x} _ {n})} \frac {\mathrm {t r} (\operatorname {C o v} _ {q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {n - 1} ])}{d}. +$$ + +We further derive $\pmb{\mu}_n^*$ . Since $\tilde{\pmb{\mu}}_n(\pmb{x}_n, \pmb{x}_0)$ is linear w.r.t. 
$\pmb{x}_0$ , we have + +$$ +\begin{array}{l} \boldsymbol {\mu} _ {n} ^ {*} (\boldsymbol {x} _ {n}) = \mathbb {E} _ {q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {n - 1} ] = \mathbb {E} _ {q (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {n})} \mathbb {E} _ {q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}, \boldsymbol {x} _ {0})} [ \boldsymbol {x} _ {n - 1} ] \\ = \mathbb {E} _ {q \left(\boldsymbol {x} _ {0} \mid \boldsymbol {x} _ {n}\right)} \tilde {\boldsymbol {\mu}} _ {n} \left(\boldsymbol {x} _ {n}, \boldsymbol {x} _ {0}\right) = \tilde {\boldsymbol {\mu}} _ {n} \left(\boldsymbol {x} _ {n}, \mathbb {E} _ {q \left(\boldsymbol {x} _ {0} \mid \boldsymbol {x} _ {n}\right)} \boldsymbol {x} _ {0}\right). \\ \end{array} +$$ + +Then we consider $\sigma_{n}^{*2}$ . According to the law of total variance, we have + +$$ +\begin{array}{l} \operatorname {C o v} _ {q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {n - 1} ] = \mathbb {E} _ {q (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {n})} \operatorname {C o v} _ {q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}, \boldsymbol {x} _ {0})} [ \boldsymbol {x} _ {n - 1} ] + \operatorname {C o v} _ {q (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {n})} \mathbb {E} _ {q (\boldsymbol {x} _ {n - 1} | \boldsymbol {x} _ {n}, \boldsymbol {x} _ {0})} [ \boldsymbol {x} _ {n - 1} ] \\ = \lambda_ {n} ^ {2} \pmb {I} + \mathrm {C o v} _ {q (\pmb {x} _ {0} | \pmb {x} _ {n})} \tilde {\pmb {\mu}} _ {n} (\pmb {x} _ {n}, \pmb {x} _ {0}) = \lambda_ {n} ^ {2} \pmb {I} + (\sqrt {\overline {{\alpha}} _ {n - 1}} - \sqrt {\overline {{\beta}} _ {n - 1} - \lambda_ {n} ^ {2}} \cdot \sqrt {\frac {\overline {{\alpha}} _ {n}}{\overline {{\beta}} _ {n}}}) ^ {2} \mathrm {C o v} _ {q (\pmb {x} _ {0} | \pmb {x} _ {n})} [ \pmb {x} _ {0} ]. 
\\ \end{array} +$$ + +Thereby, + +$$ +\begin{array}{l} \sigma_ {n} ^ {* 2} = \mathbb {E} _ {q _ {n} (\boldsymbol {x} _ {n})} \frac {\operatorname {t r} \left(\operatorname {C o v} _ {q (\boldsymbol {x} _ {n - 1} \mid \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {n - 1} ]\right)}{d} \\ = \lambda_ {n} ^ {2} + (\sqrt {\overline {{\alpha}} _ {n - 1}} - \sqrt {\overline {{\beta}} _ {n - 1} - \lambda_ {n} ^ {2}} \cdot \sqrt {\frac {\overline {{\alpha}} _ {n}}{\overline {{\beta}} _ {n}}}) ^ {2} \mathbb {E} _ {q (\boldsymbol {x} _ {n})} \frac {\operatorname {t r} (\operatorname {C o v} _ {q (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {0} ])}{d}. \\ \end{array} +$$ + +# A.2 PROOF OF THEOREM 1 + +Theorem 1. (Score representation of the optimal solution to Eq. (4), proof in Appendix A.2) + +The optimal solution $\pmb{\mu}_n^*(\pmb{x}_n)$ and $\sigma_n^{*2}$ to Eq. (4) are + +$$ +\boldsymbol {\mu} _ {n} ^ {*} (\boldsymbol {x} _ {n}) = \tilde {\boldsymbol {\mu}} _ {n} \left(\boldsymbol {x} _ {n}, \frac {1}{\sqrt {\bar {\alpha} _ {n}}} \left(\boldsymbol {x} _ {n} + \bar {\beta} _ {n} \nabla_ {\boldsymbol {x} _ {n}} \log q _ {n} (\boldsymbol {x} _ {n})\right)\right), \tag {6} +$$ + +$$ +\sigma_ {n} ^ {* 2} = \lambda_ {n} ^ {2} + \left(\sqrt {\frac {\bar {\beta} _ {n}}{\alpha_ {n}}} - \sqrt {\bar {\beta} _ {n - 1} - \lambda_ {n} ^ {2}}\right) ^ {2} \left(1 - \bar {\beta} _ {n} \mathbb {E} _ {q _ {n} (\boldsymbol {x} _ {n})} \frac {\left| \left| \nabla_ {\boldsymbol {x} _ {n}} \log q _ {n} (\boldsymbol {x} _ {n}) \right| \right| ^ {2}}{d}\right), \tag {7} +$$ + +where $q_{n}(\pmb{x}_{n})$ is the marginal distribution of the forward process at the timestep $n$ and $d$ is the dimension of the data. + +Proof. 
According to Lemma 11 and Lemma 13, we have
+
+$$
+\boldsymbol{\mu}_n^*(\boldsymbol{x}_n) = \tilde{\boldsymbol{\mu}}_n(\boldsymbol{x}_n, \mathbb{E}_{q(\boldsymbol{x}_0|\boldsymbol{x}_n)}\boldsymbol{x}_0) = \tilde{\boldsymbol{\mu}}_n\left(\boldsymbol{x}_n, \frac{1}{\sqrt{\overline{\alpha}_n}}(\boldsymbol{x}_n + \overline{\beta}_n\nabla_{\boldsymbol{x}_n}\log q_n(\boldsymbol{x}_n))\right),
+$$
+
+and
+
+$$
+\begin{array}{l} \sigma_n^{*2} = \lambda_n^2 + (\sqrt{\overline{\alpha}_{n-1}} - \sqrt{\overline{\beta}_{n-1} - \lambda_n^2}\cdot\sqrt{\frac{\overline{\alpha}_n}{\overline{\beta}_n}})^2\,\mathbb{E}_{q(\boldsymbol{x}_n)}\frac{\operatorname{tr}(\operatorname{Cov}_{q(\boldsymbol{x}_0|\boldsymbol{x}_n)}[\boldsymbol{x}_0])}{d} \\ = \lambda_n^2 + (\sqrt{\overline{\alpha}_{n-1}} - \sqrt{\overline{\beta}_{n-1} - \lambda_n^2}\cdot\sqrt{\frac{\overline{\alpha}_n}{\overline{\beta}_n}})^2\frac{\overline{\beta}_n}{\overline{\alpha}_n}(1 - \overline{\beta}_n\mathbb{E}_{q(\boldsymbol{x}_n)}\frac{||\nabla_{\boldsymbol{x}_n}\log q_n(\boldsymbol{x}_n)||^2}{d}) \\ = \lambda_n^2 + (\sqrt{\frac{\overline{\beta}_n}{\alpha_n}} - \sqrt{\overline{\beta}_{n-1} - \lambda_n^2})^2(1 - \overline{\beta}_n\mathbb{E}_{q(\boldsymbol{x}_n)}\frac{||\nabla_{\boldsymbol{x}_n}\log q_n(\boldsymbol{x}_n)||^2}{d}). \\ \end{array}
+$$
+
+![](images/262c8d960706ecb6370e1422fe5146caf7ecbdaf89a6ec3aa23e1af04fae7775.jpg)
+
+# A.3 PROOF OF THEOREM 2
+
+Theorem 2.
(Bounds of the optimal reverse variance, proof in Appendix A.3)
+
+$\sigma_{n}^{*2}$ has the following lower and upper bounds:
+
+$$
+\lambda_n^2 \leq \sigma_n^{*2} \leq \lambda_n^2 + \left(\sqrt{\frac{\bar{\beta}_n}{\alpha_n}} - \sqrt{\bar{\beta}_{n-1} - \lambda_n^2}\right)^2. \tag{11}
+$$
+
+If we further assume $q(\pmb{x}_0)$ is a bounded distribution in $[a, b]^d$, where $d$ is the dimension of data, then $\sigma_n^{*2}$ can be further upper bounded by
+
+$$
+\sigma_n^{*2} \leq \lambda_n^2 + \left(\sqrt{\bar{\alpha}_{n-1}} - \sqrt{\bar{\beta}_{n-1} - \lambda_n^2}\cdot\sqrt{\frac{\bar{\alpha}_n}{\bar{\beta}_n}}\right)^2\left(\frac{b - a}{2}\right)^2. \tag{12}
+$$
+
+Proof. According to Lemma 13 and Theorem 1, we have
+
+$$
+\lambda_n^2 \leq \sigma_n^{*2} \leq \lambda_n^2 + (\sqrt{\frac{\overline{\beta}_n}{\alpha_n}} - \sqrt{\overline{\beta}_{n-1} - \lambda_n^2})^2.
+$$
+
+If we further assume $q(\pmb{x}_0)$ is a bounded distribution in $[a, b]^d$, then $q(\pmb{x}_0 | \pmb{x}_n)$ is also a bounded distribution in $[a, b]^d$. According to Lemma 12, we have
+
+$$
+\mathbb{E}_{q\left(\boldsymbol{x}_n\right)}\frac{\operatorname{tr}\left(\operatorname{Cov}_{q\left(\boldsymbol{x}_0 \mid \boldsymbol{x}_n\right)}\left[\boldsymbol{x}_0\right]\right)}{d} \leq \left(\frac{b - a}{2}\right)^2.
+$$
+
+Combining with Lemma 13, we have
+
+$$
+\begin{array}{l} \sigma_n^{*2} = \lambda_n^2 + (\sqrt{\overline{\alpha}_{n-1}} - \sqrt{\overline{\beta}_{n-1} - \lambda_n^2}\cdot\sqrt{\frac{\overline{\alpha}_n}{\overline{\beta}_n}})^2\,\mathbb{E}_{q(\pmb{x}_n)}\frac{\operatorname{tr}(\operatorname{Cov}_{q(\pmb{x}_0|\pmb{x}_n)}[\pmb{x}_0])}{d} \\ \leq \lambda_n^2 + (\sqrt{\overline{\alpha}_{n-1}} - \sqrt{\overline{\beta}_{n-1} - \lambda_n^2}\cdot\sqrt{\frac{\overline{\alpha}_n}{\overline{\beta}_n}})^2(\frac{b - a}{2})^2. \\ \end{array}
+$$
+
+![](images/e148bf4fb1943ece3430f3ff2565f8b1e676fc217b4732ccfb47f81b2513d91c.jpg)
+
+# A.4 PROOF OF THE DECOMPOSED OPTIMAL KL
+
+Theorem 3. (Decomposed optimal KL, proof in Appendix A.4)
+
+The KL divergence between the shorter forward process and its optimal reverse process is
+
+$$
+D_{\mathrm{KL}}\left(q\left(\boldsymbol{x}_0, \boldsymbol{x}_{\tau_1}, \dots, \boldsymbol{x}_{\tau_K}\right)||p^*\left(\boldsymbol{x}_0, \boldsymbol{x}_{\tau_1}, \dots, \boldsymbol{x}_{\tau_K}\right)\right) = \frac{d}{2}\sum_{k=2}^{K} J\left(\tau_{k-1}, \tau_k\right) + c,
+$$
+
+where $J(\tau_{k-1}, \tau_k) = \log \frac{\sigma_{\tau_{k-1}|\tau_k}^{*2}}{\lambda_{\tau_{k-1}|\tau_k}^2}$ and $c$ is a constant unrelated to the trajectory $\tau$.
+
+Proof.
According to Lemma 7 and Lemma 9, we have
+
+$$
+\begin{array}{l} D_{\mathrm{KL}}\left(q\left(\boldsymbol{x}_0, \boldsymbol{x}_{\tau_1}, \dots, \boldsymbol{x}_{\tau_K}\right) \| p^*\left(\boldsymbol{x}_0, \boldsymbol{x}_{\tau_1}, \dots, \boldsymbol{x}_{\tau_K}\right)\right) \\ = \mathbb{E}_q D_{\mathrm{KL}}(q(\boldsymbol{x}_0 | \boldsymbol{x}_{\tau_1}, \dots, \boldsymbol{x}_{\tau_K})||p^*(\boldsymbol{x}_0 | \boldsymbol{x}_1)) + D_{\mathrm{KL}}(q(\boldsymbol{x}_{\tau_1}, \dots, \boldsymbol{x}_{\tau_K})||p^*(\boldsymbol{x}_{\tau_1}, \dots, \boldsymbol{x}_{\tau_K})) \\ = \mathbb{E}_q D_{\mathrm{KL}}\left(q\left(\boldsymbol{x}_0 \mid \boldsymbol{x}_{\tau_1}, \dots, \boldsymbol{x}_{\tau_K}\right)||p^*\left(\boldsymbol{x}_0 \mid \boldsymbol{x}_1\right)\right) + H\left(q\left(\boldsymbol{x}_N\right), p\left(\boldsymbol{x}_N\right)\right) \\ + \frac{d}{2}\sum_{k=2}^{K}\log\left(2\pi e\sigma_{\tau_{k-1}|\tau_k}^{*2}\right) - H\left(q\left(\boldsymbol{x}_{\tau_1}, \dots, \boldsymbol{x}_{\tau_K}\right)\right) \\ = -\mathbb{E}_q\log p^*(\boldsymbol{x}_0|\boldsymbol{x}_1) + H(q(\boldsymbol{x}_N), p(\boldsymbol{x}_N)) + \frac{d}{2}\sum_{k=2}^{K}\log\left(2\pi e\sigma_{\tau_{k-1}|\tau_k}^{*2}\right) - H\left(q(\boldsymbol{x}_0, \boldsymbol{x}_{\tau_1}, \dots, \boldsymbol{x}_{\tau_K})\right) \\ = -\mathbb{E}_q\log p^*(\boldsymbol{x}_0|\boldsymbol{x}_1) + H(q(\boldsymbol{x}_N), p(\boldsymbol{x}_N)) + \frac{d}{2}\sum_{k=2}^{K}\log\left(2\pi e\sigma_{\tau_{k-1}|\tau_k}^{*2}\right) \\ - H(q(\boldsymbol{x}_0)) - \frac{d}{2}\log(2\pi e\overline{\beta}_N) - \frac{d}{2}\sum_{k=2}^{K}\log(2
\pi e \lambda_ {\tau_ {k - 1} | \tau_ {k}} ^ {2}) \\ = - \mathbb {E} _ {q} \log p ^ {*} (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {1}) + H (q (\boldsymbol {x} _ {N}), p (\boldsymbol {x} _ {N})) + \frac {d}{2} \sum_ {k = 2} ^ {K} \log \frac {\sigma_ {\tau_ {k - 1} | \tau_ {k}} ^ {* 2}}{\lambda_ {\tau_ {k - 1} | \tau_ {k}} ^ {2}} - H (q (\boldsymbol {x} _ {0})) - \frac {d}{2} \log (2 \pi e \overline {{\beta}} _ {N}). \\ \end{array} +$$ + +Let $J(\tau_{k-1}, \tau_k) = \log \frac{\sigma_{\tau_{k-1}|\tau_k}^{*2}}{\lambda_{\tau_{k-1}|\tau_k}^2}$ and $c = -\mathbb{E}_q \log p^*(\pmb{x}_0|\pmb{x}_1) + H(q(\pmb{x}_N), p(\pmb{x}_N)) - H(q(\pmb{x}_0)) - \frac{d}{2} \log (2\pi e\overline{\beta}_N)$ , then $c$ is a constant unrelated to the trajectory $\tau$ and + +$$ +D _ {\mathrm {K L}} (q (\boldsymbol {x} _ {0}, \boldsymbol {x} _ {\tau_ {1}}, \dots , \boldsymbol {x} _ {\tau_ {K}}) | | p ^ {*} (\boldsymbol {x} _ {0}, \boldsymbol {x} _ {\tau_ {1}}, \dots , \boldsymbol {x} _ {\tau_ {K}})) = \frac {d}{2} \sum_ {k = 2} ^ {K} J (\tau_ {k - 1}, \tau_ {k}) + c. +$$ + +![](images/f32b968422d20dea0ee349adf45a16ac22b3ac683d5dda9005751df215cb9e0a.jpg) + +# A.5 THE FORMAL RESULT FOR SECTION 5 AND ITS PROOF + +Here we present the formal result of the relationship between the score function and the data covariance matrix mentioned in Section 5. + +Proposition 1. 
(Proof in Appendix A.5) The expected conditional covariance matrix of the data distribution is determined by the score function $\nabla_{\pmb{x}_n}\log q_n(\pmb{x}_n)$ as follows: + +$$ +\mathbb {E} _ {q \left(\boldsymbol {x} _ {n}\right)} \operatorname {C o v} _ {q \left(\boldsymbol {x} _ {0} \mid \boldsymbol {x} _ {n}\right)} \left[ \boldsymbol {x} _ {0} \right] = \frac {\overline {{\beta}} _ {n}}{\overline {{\alpha}} _ {n}} \left(\boldsymbol {I} - \overline {{\beta}} _ {n} \mathbb {E} _ {q _ {n} \left(\boldsymbol {x} _ {n}\right)} \left[ \nabla_ {\boldsymbol {x} _ {n}} \log q _ {n} \left(\boldsymbol {x} _ {n}\right) \nabla_ {\boldsymbol {x} _ {n}} \log q _ {n} \left(\boldsymbol {x} _ {n}\right) ^ {\top} \right]\right), \tag {15} +$$ + +which contributes to the data covariance matrix according to the law of total variance + +$$ +\operatorname {C o v} _ {q \left(\boldsymbol {x} _ {0}\right)} [ \boldsymbol {x} _ {0} ] = \mathbb {E} _ {q \left(\boldsymbol {x} _ {n}\right)} \operatorname {C o v} _ {q \left(\boldsymbol {x} _ {0} \mid \boldsymbol {x} _ {n}\right)} [ \boldsymbol {x} _ {0} ] + \operatorname {C o v} _ {q \left(\boldsymbol {x} _ {n}\right)} \mathbb {E} _ {q \left(\boldsymbol {x} _ {0} \mid \boldsymbol {x} _ {n}\right)} [ \boldsymbol {x} _ {0} ]. \tag {16} +$$ + +Proof. Since $q(\pmb{x}_n|\pmb{x}_0) = \mathcal{N}(\pmb{x}_n|\sqrt{\overline{\alpha}_n}\pmb{x}_0, \overline{\beta}_n\pmb{I})$ , according to Lemma 11, we have + +$$ +\mathbb {E} _ {q (\pmb {x} _ {n})} \mathrm {C o v} _ {q (\pmb {x} _ {0} | \pmb {x} _ {n})} [ \pmb {x} _ {0} ] = \frac {\overline {{\beta}} _ {n}}{\overline {{\alpha}} _ {n}} (\pmb {I} - \overline {{\beta}} _ {n} \mathbb {E} _ {q _ {n} (\pmb {x} _ {n})} \nabla_ {\pmb {x} _ {n}} \log q _ {n} (\pmb {x} _ {n}) \nabla_ {\pmb {x} _ {n}} \log q _ {n} (\pmb {x} _ {n}) ^ {\top}). +$$ + +The law of total variance is a classical result in statistics. 
Here we prove it for completeness: + +$$ +\begin{array}{l} \mathbb {E} _ {q \left(\boldsymbol {x} _ {n}\right)} \operatorname {C o v} _ {q \left(\boldsymbol {x} _ {0} \mid \boldsymbol {x} _ {n}\right)} [ \boldsymbol {x} _ {0} ] + \operatorname {C o v} _ {q \left(\boldsymbol {x} _ {n}\right)} \mathbb {E} _ {q \left(\boldsymbol {x} _ {0} \mid \boldsymbol {x} _ {n}\right)} [ \boldsymbol {x} _ {0} ] \\ = \mathbb {E} _ {q (\boldsymbol {x} _ {n})} \left(\mathbb {E} _ {q (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {n})} \boldsymbol {x} _ {0} \boldsymbol {x} _ {0} ^ {\top} - \mathbb {E} _ {q (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {0} ] \mathbb {E} _ {q (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {0} ] ^ {\top}\right) \\ + \mathbb {E} _ {q (\boldsymbol {x} _ {n})} \left(\mathbb {E} _ {q (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {0} ] \mathbb {E} _ {q (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {0} ] ^ {\top}\right) - \left(\mathbb {E} _ {q (\boldsymbol {x} _ {n})} \mathbb {E} _ {q (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {0} ]\right) \left(\mathbb {E} _ {q (\boldsymbol {x} _ {n})} \mathbb {E} _ {q (\boldsymbol {x} _ {0} | \boldsymbol {x} _ {n})} [ \boldsymbol {x} _ {0} ]\right) ^ {\top} \\ = \mathbb {E} _ {q (\boldsymbol {x} _ {0})} \boldsymbol {x} _ {0} \boldsymbol {x} _ {0} ^ {\top} - \mathbb {E} _ {q (\boldsymbol {x} _ {0})} [ \boldsymbol {x} _ {0} ] \mathbb {E} _ {q (\boldsymbol {x} _ {0})} [ \boldsymbol {x} _ {0} ] ^ {\top} = \operatorname {C o v} _ {q (\boldsymbol {x} _ {0})} [ \boldsymbol {x} _ {0} ]. 
\\ \end{array}
+$$
+
+Algorithm 1 The DP algorithm for the least-cost-path problem (Watson et al., 2021)
+1: Input: Cost function $J(s,t)$ and integers $K,N$ $(1\leq K\leq N)$
+2: Output: The least-cost trajectory $1 = \tau_{1} < \dots < \tau_{K} = N$
+3: $C\gets \{\infty\}_{1\leq k,n\leq N}$ , $D\gets \{-1\}_{1\leq k,n\leq N}$
+4: $C[1,1]\gets 0$
+5: for $k = 2$ to $K$ do $\triangleright$ Calculate $C$ and $D$
+6: $CJ\gets \{C[k-1,s] + J(s,n)\}_{1\leq s\leq N,k\leq n\leq N}$
+7: $C[k,k{:}]\gets (\min(CJ[:,k]),\min(CJ[:,k+1]),\dots,\min(CJ[:,N]))$
+8: $D[k,k{:}]\gets (\arg\min(CJ[:,k]),\arg\min(CJ[:,k+1]),\dots,\arg\min(CJ[:,N]))$
+9: end for
+10: $\tau_K \gets N$
+11: for $k = K$ to 2 do $\triangleright$ Calculate $\tau$
+12: $\tau_{k-1}\gets D[k,\tau_k]$
+13: end for
+14: return $\tau$
+
+# B THE DP ALGORITHM FOR THE LEAST-COST-PATH PROBLEM
+
+Given a cost function $J(s,t)$ with $1 \leq s < t$ and integers $k, n \geq 1$, we want to find a trajectory $1 = \tau_1 < \dots < \tau_k = n$ of $k$ nodes starting from 1 and terminating at $n$, such that the total cost $J(\tau_1, \tau_2) + J(\tau_2, \tau_3) + \dots + J(\tau_{k-1}, \tau_k)$ is minimized. Such a problem can be solved by the DP algorithm proposed by Watson et al. (2021). Let $C[k,n]$ be the cost of the least-cost trajectory of $k$ nodes terminating at $n$, and let $D[k,n]$ be the second-to-last node $\tau_{k-1}$ of that trajectory. For simplicity, we also let $J(s,t) = \infty$ for $s \geq t \geq 1$. Then for $k = 1$, we have $C[1,n] = \begin{cases} 0 & n = 1 \\ \infty & N \geq n > 1 \end{cases}$ and $D[1,n] = -1$ (here $\infty$ and $-1$ represent undefined values for simplicity). For $N \geq k \geq 2$, we have
+
+$$
+C[k, n] = \left\{\begin{array}{ll} \infty & 1 \leq n < k \\ \min_{k-1\leq s\leq n-1} C[k-1, s] + J(s, n) = \min_{1\leq s\leq N} C[k-1, s] + J(s, n) & N \geq n \geq k, \end{array}\right.
+$$
+
+$$
+D[k,n] = \left\{\begin{array}{ll} -1 & 1\leq n < k \\ \underset{k-1\leq s\leq n-1}{\arg\min} C[k-1,s] + J(s,n) = \underset{1\leq s\leq N}{\arg\min} C[k-1,s] + J(s,n) & N\geq n\geq k. \end{array}\right.
+$$
+
+Once $D$ has been calculated, we can recover the optimal trajectory recursively via $\tau_{K} = N$ and $\tau_{k-1} = D[k,\tau_k]$. We summarize the procedure in Algorithm 1.
+
+# C THE BOUNDS OF THE OPTIMAL REVERSE VARIANCE CONSTRAINED ON A TRAJECTORY
+
+In Section 4, we derive the optimal reverse variance constrained on a trajectory. Indeed, this optimal reverse variance can also be bounded similarly to Theorem 2. We formalize this in Corollary 1.
+
+Corollary 1. (Bounds of the optimal reverse variance constrained on a trajectory)
+
+$\sigma_{\tau_{k-1}|\tau_k}^{*2}$ has the following lower and upper bounds:
+
+$$
+\lambda_{\tau_{k-1}|\tau_k}^2 \leq \sigma_{\tau_{k-1}|\tau_k}^{*2} \leq \lambda_{\tau_{k-1}|\tau_k}^2 + \left(\sqrt{\frac{\overline{\beta}_{\tau_k}}{\alpha_{\tau_k|\tau_{k-1}}}} - \sqrt{\overline{\beta}_{\tau_{k-1}} - \lambda_{\tau_{k-1}|\tau_k}^2}\right)^2.
+$$
+
+If we further assume $q(\pmb{x}_0)$ is a bounded distribution in $[a, b]^d$, where $d$ is the dimension of data, then $\sigma_{\tau_{k-1}|\tau_k}^{*2}$ can be further upper bounded by
+
+$$
+\sigma_{\tau_{k-1}|\tau_k}^{*2} \leq \lambda_{\tau_{k-1}|\tau_k}^2 + \left(\sqrt{\overline{\alpha}_{\tau_{k-1}}} - \sqrt{\overline{\beta}_{\tau_{k-1}} - \lambda_{\tau_{k-1}|\tau_k}^2}\cdot\sqrt{\frac{\overline{\alpha}_{\tau_k}}{\overline{\beta}_{\tau_k}}}\right)^2\left(\frac{b - a}{2}\right)^2.
+$$ + +# D SIMPLIFIED RESULTS FOR THE DDPM FORWARD PROCESS + +The optimal solution $\pmb{\mu}_n^*(\pmb{x}_n)$ and $\sigma_n^{*2}$ in Theorem 1 and the bounds of $\sigma_n^{*2}$ in Theorem 2 can be directly simplified for the DDPM forward process by letting $\lambda_n^2 = \tilde{\beta}_n$ . We list the simplified results in Corollary 2 and Corollary 3. + +Corollary 2. (Simplified score representation of the optimal solution) + +When $\lambda_n^2 = \tilde{\beta}_n$ , the optimal solution $\pmb{\mu}_n^* (\pmb{x}_n)$ and $\sigma_{n}^{*2}$ to Eq. (4) are + +$$ +\boldsymbol {\mu} _ {n} ^ {*} (\boldsymbol {x} _ {n}) = \frac {1}{\sqrt {\alpha_ {n}}} \left(\boldsymbol {x} _ {n} + \beta_ {n} \nabla_ {\boldsymbol {x} _ {n}} \log q _ {n} (\boldsymbol {x} _ {n})\right), +$$ + +$$ +\sigma_ {n} ^ {* 2} = \frac {\beta_ {n}}{\alpha_ {n}} (1 - \beta_ {n} \mathbb {E} _ {q _ {n} (\boldsymbol {x} _ {n})} \frac {| | \nabla_ {\boldsymbol {x} _ {n}} \log q _ {n} (\boldsymbol {x} _ {n}) | | ^ {2}}{d}). +$$ + +Corollary 3. (Simplified bounds of the optimal reverse variance) + +When $\lambda_n^2 = \tilde{\beta}_n$ , $\sigma_n^{*2}$ has the following lower and upper bounds: + +$$ +\tilde {\beta} _ {n} \leq \sigma_ {n} ^ {* 2} \leq \frac {\beta_ {n}}{\alpha_ {n}}. +$$ + +If we further assume $q(\pmb{x}_0)$ is a bounded distribution in $[a, b]^d$ , where $d$ is the dimension of data, then $\sigma_n^{*2}$ can be further upper bounded by + +$$ +\sigma_ {n} ^ {* 2} \leq \tilde {\beta} _ {n} + \frac {\overline {{\alpha}} _ {n - 1} \beta_ {n} ^ {2}}{\overline {{\beta}} _ {n} ^ {2}} \left(\frac {b - a}{2}\right) ^ {2}. +$$ + +As for the shorter forward process defined in Eq. (13), it also includes the DDPM as a special case when $\lambda_{\tau_{k-1}|\tau_k}^2 = \tilde{\beta}_{\tau_{k-1}|\tau_k}$ , where $\tilde{\beta}_{\tau_{k-1}|\tau_k} := \frac{\overline{\beta}_{\tau_{k-1}}}{\overline{\beta}_{\tau_k}} \beta_{\tau_k|\tau_{k-1}}$ . 
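The DDPM-specific bounds in Corollary 3 are easy to check numerically. The sketch below is a minimal verification, assuming the common linear $\beta_n$ schedule and data bounded in $[-1,1]$; both choices are illustrative assumptions, not fixed by this appendix:

```python
import numpy as np

# Sanity check of Corollary 3: tilde{beta}_n <= sigma_n^{*2} <= beta_n / alpha_n
# for a DDPM forward process. The linear beta schedule is an illustrative assumption.
N = 1000
beta = np.linspace(1e-4, 0.02, N)      # beta_n, n = 1..N
alpha = 1.0 - beta                     # alpha_n
alpha_bar = np.cumprod(alpha)          # \bar{alpha}_n
beta_bar = 1.0 - alpha_bar             # \bar{beta}_n (variance-preserving process)

# tilde{beta}_n = (\bar{beta}_{n-1} / \bar{beta}_n) * beta_n, with \bar{beta}_0 = 0
beta_bar_prev = np.concatenate(([0.0], beta_bar[:-1]))
beta_tilde = beta_bar_prev / beta_bar * beta

lower = beta_tilde                     # lower bound on sigma_n^{*2}
upper = beta / alpha                   # upper bound on sigma_n^{*2}

# Tighter upper bound of Corollary 3 for data in [-1, 1], i.e. (b - a)/2 = 1:
alpha_bar_prev = np.concatenate(([1.0], alpha_bar[:-1]))
upper_bounded = beta_tilde + alpha_bar_prev * beta**2 / beta_bar**2

assert np.all(lower <= upper)          # the two bounds are consistent
```

Any reverse variance chosen in $[\tilde{\beta}_n, \beta_n/\alpha_n]$ respects the first pair of bounds; `upper_bounded` adds the bounded-data refinement.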
Similar to Corollary 2, the optimal mean and variance of its reverse process can also be simplified for DDPMs by letting $\lambda_{\tau_{k-1}|\tau_k}^2 = \tilde{\beta}_{\tau_{k-1}|\tau_k}$. Formally, the simplified optimal mean and variance are
+
+$$
+\boldsymbol{\mu}_{\tau_{k-1}|\tau_k}^*(\boldsymbol{x}_{\tau_k}) = \frac{1}{\sqrt{\alpha_{\tau_k|\tau_{k-1}}}}(\boldsymbol{x}_{\tau_k} + \beta_{\tau_k|\tau_{k-1}}\nabla_{\boldsymbol{x}_{\tau_k}}\log q_{\tau_k}(\boldsymbol{x}_{\tau_k})),
+$$
+
+$$
+\sigma_{\tau_{k-1}|\tau_k}^{*2} = \frac{\beta_{\tau_k|\tau_{k-1}}}{\alpha_{\tau_k|\tau_{k-1}}}(1 - \beta_{\tau_k|\tau_{k-1}}\mathbb{E}_{q_{\tau_k}(\boldsymbol{x}_{\tau_k})}\frac{||\nabla_{\boldsymbol{x}_{\tau_k}}\log q_{\tau_k}(\boldsymbol{x}_{\tau_k})||^2}{d}).
+$$
+
+Besides, Theorem 3 can also be simplified for DDPMs. We list the simplified result in Corollary 4.
+
+Corollary 4. (Simplified decomposed optimal KL)
+
+When $\lambda_{\tau_{k-1}|\tau_k}^2 = \tilde{\beta}_{\tau_{k-1}|\tau_k}$, the KL divergence between the subprocess and its optimal reverse process is
+
+$$
+D_{\mathrm{KL}}\left(q\left(\boldsymbol{x}_0, \boldsymbol{x}_{\tau_1}, \dots, \boldsymbol{x}_{\tau_K}\right) \| p^*\left(\boldsymbol{x}_0, \boldsymbol{x}_{\tau_1}, \dots, \boldsymbol{x}_{\tau_K}\right)\right) = \frac{d}{2}\sum_{k=2}^{K} J\left(\tau_{k-1}, \tau_k\right) + c,
+$$
+
+where
+
+$$
+J(\tau_{k-1}, \tau_k) = \log(1 - \beta_{\tau_k|\tau_{k-1}}\mathbb{E}_{q_{\tau_k}(\boldsymbol{x}_{\tau_k})}\frac{||\nabla_{\boldsymbol{x}_{\tau_k}}\log q_{\tau_k}(\boldsymbol{x}_{\tau_k})||^2}{d}),
+$$
+
+and $c$ is a constant unrelated to the trajectory $\tau$.
+
+# E EXTENSION TO DIFFUSION PROCESS WITH CONTINUOUS TIMESTEPS
+
+Song et al.
(2020b) generalize the diffusion process to continuous timesteps by introducing a stochastic differential equation (SDE) $\mathrm{d}\boldsymbol{z} = f(t)\boldsymbol{z}\,\mathrm{d}t + g(t)\,\mathrm{d}\boldsymbol{w}$ . Without loss of generality, we consider the parameterization of $f(t)$ and $g(t)$ introduced by Kingma et al. (2021)

$$
f(t) = \frac{1}{2} \frac{\mathrm{d} \log \overline{\alpha}_t}{\mathrm{d}t}, \quad g(t)^2 = \frac{\mathrm{d}\overline{\beta}_t}{\mathrm{d}t} - \frac{\mathrm{d} \log \overline{\alpha}_t}{\mathrm{d}t} \overline{\beta}_t,
$$

where $\overline{\alpha}_t$ and $\overline{\beta}_t$ are scalar-valued functions satisfying certain regularity conditions (Kingma et al., 2021) with domain $t\in [0,1]$ . Such a parameterization induces a diffusion process on continuous timesteps $q(\boldsymbol{x}_0, \boldsymbol{z}_{[0,1]})$ , s.t.,

$$
q\left(\boldsymbol{z}_t \mid \boldsymbol{x}_0\right) = \mathcal{N}\left(\boldsymbol{z}_t \mid \sqrt{\overline{\alpha}_t}\, \boldsymbol{x}_0, \overline{\beta}_t \boldsymbol{I}\right), \quad \forall t \in [0, 1],
$$

$$
q(\boldsymbol{z}_t | \boldsymbol{z}_s) = \mathcal{N}(\boldsymbol{z}_t | \sqrt{\alpha_{t|s}}\, \boldsymbol{z}_s, \beta_{t|s} \boldsymbol{I}), \quad \forall 0 \leq s < t \leq 1,
$$

where $\alpha_{t|s} \coloneqq \overline{\alpha}_t / \overline{\alpha}_s$ and $\beta_{t|s} \coloneqq \overline{\beta}_t - \alpha_{t|s}\overline{\beta}_s$ .

# E.1 ANALYTIC ESTIMATE OF THE OPTIMAL REVERSE VARIANCE

Kingma et al. (2021) introduce $p(\boldsymbol{z}_s|\boldsymbol{z}_t) = \mathcal{N}(\boldsymbol{z}_s|\boldsymbol{\mu}_{s|t}(\boldsymbol{z}_t),\sigma_{s|t}^2)$ ( $s < t$ ) to reverse from timestep $t$ to timestep $s$ , where $\sigma_{s|t}^2$ is fixed to $\frac{\overline{\beta}_s}{\overline{\beta}_t}\beta_{s|t}$ . In contrast, we show that $\sigma_{s|t}^2$ also has an optimal solution in an analytic form of the score function in the sense of KL minimization.
According to Lemma 9 and Lemma 11, we have

$$
\boldsymbol{\mu}_{s|t}^{*}(\boldsymbol{z}_t) = \mathbb{E}_{q(\boldsymbol{z}_s|\boldsymbol{z}_t)}[\boldsymbol{z}_s] = \frac{1}{\sqrt{\alpha_{t|s}}}\left(\boldsymbol{z}_t + \beta_{t|s} \nabla_{\boldsymbol{z}_t} \log q(\boldsymbol{z}_t)\right),
$$

$$
\sigma_{s|t}^{*2} = \mathbb{E}_q \frac{\mathrm{tr}(\mathrm{Cov}_{q(\boldsymbol{z}_s|\boldsymbol{z}_t)}[\boldsymbol{z}_s])}{d} = \frac{\beta_{t|s}}{\alpha_{t|s}}\left(1 - \beta_{t|s} \mathbb{E}_{q(\boldsymbol{z}_t)} \frac{||\nabla_{\boldsymbol{z}_t} \log q(\boldsymbol{z}_t)||^2}{d}\right).
$$

Thus, both the optimal mean and variance have closed-form expressions in terms of the score function. In this case, we first estimate the expected mean squared norm of the score function by $\Gamma_t$ for $t\in [0,1]$ , where

$$
\Gamma_t = \mathbb{E}_{q(\boldsymbol{z}_t)} \frac{||\boldsymbol{s}_t(\boldsymbol{z}_t)||^2}{d}.
$$

Notice that there are infinitely many timesteps in $[0,1]$ . In practice, we can only choose a finite number of timesteps $0 = t_1 < \dots < t_N = 1$ and calculate $\Gamma_{t_n}$ . For a timestep $t$ between $t_{n-1}$ and $t_n$ , we can use a linear interpolation between $\Gamma_{t_{n-1}}$ and $\Gamma_{t_n}$ . Then, we can estimate $\sigma_{s|t}^{*2}$ by

$$
\hat{\sigma}_{s|t}^2 = \frac{\beta_{t|s}}{\alpha_{t|s}}\left(1 - \beta_{t|s} \Gamma_t\right).
$$

# E.2 ANALYTIC ESTIMATION OF THE OPTIMAL REVERSE TRAJECTORY

Now we consider optimizing the trajectory $0 = \tau_{1} < \dots < \tau_{K} = 1$ in the sense of KL minimization

$$
\min_{\tau_1, \dots, \tau_K} D_{\mathrm{KL}}\left(q\left(\boldsymbol{x}_0, \boldsymbol{z}_{\tau_1}, \dots, \boldsymbol{z}_{\tau_K}\right) || p^{*}\left(\boldsymbol{x}_0, \boldsymbol{z}_{\tau_1}, \dots, \boldsymbol{z}_{\tau_K}\right)\right).
+$$

Similar to Theorem 3, the optimal KL is

$$
D_{\mathrm{KL}}\left(q\left(\boldsymbol{x}_0, \boldsymbol{z}_{\tau_1}, \dots, \boldsymbol{z}_{\tau_K}\right) || p^{*}\left(\boldsymbol{x}_0, \boldsymbol{z}_{\tau_1}, \dots, \boldsymbol{z}_{\tau_K}\right)\right) = \frac{d}{2} \sum_{k=2}^{K} J\left(\tau_{k-1}, \tau_k\right) + c,
$$

where $J(\tau_{k-1}, \tau_k) = \log(1 - \beta_{\tau_k| \tau_{k-1}} \mathbb{E}_q \frac{||\nabla_{\boldsymbol{z}_{\tau_k}} \log q(\boldsymbol{z}_{\tau_k})||^2}{d})$ and $c$ is unrelated to $\tau$ . The difference is that $J(s, t)$ is defined on a continuous range $0 \leq s < t \leq 1$ , so the DP algorithm is not directly applicable. However, we can restrict $J(s, t)$ to a finite number of timesteps $0 = t_1 < \dots < t_N = 1$ . Then we can apply the DP algorithm (see Algorithm 1) to the restricted $J(s, t)$ .

# F EXPERIMENTAL DETAILS

# F.1 DETAILS OF SCORE-BASED MODELS

The CelebA 64x64 pretrained score-based model is provided in the official code (https://github.com/ermongroup/ddim) of Song et al. (2020a). The LSUN Bedroom pretrained score-based model is provided in the official code (https://github.com/hojonathanho/diffusion) of Ho et al. (2020). Both have a total of $N = 1000$ timesteps and use the linear schedule (Ho et al., 2020) as the forward noise schedule.

The ImageNet 64x64 pretrained score-based model is the unconditional $L_{\mathrm{hybrid}}$ model provided in the official code (https://github.com/openai/improved-diffusion) of Nichol & Dhariwal (2021). The model includes both the mean and variance networks, where the mean network is trained with Eq. (5) as the standard DDPM (Ho et al., 2020) and the variance network is trained with the $L_{\mathrm{vb}}$ objective. We only use the mean network. The model has a total of $N = 4000$ timesteps and its forward noise schedule is the cosine schedule (Nichol & Dhariwal, 2021).
+

The CIFAR10 score-based models are trained by ourselves. They have a total of $N = 1000$ timesteps and are trained with the linear and cosine forward noise schedules respectively. We use the same U-Net model architecture as Nichol & Dhariwal (2021). Following Nichol & Dhariwal (2021), we train for 500K iterations with a batch size of 128, use a learning rate of 0.0001 with the AdamW optimizer (Loshchilov & Hutter, 2017) and use an exponential moving average (EMA) with a rate of 0.9999. We save a checkpoint every 10K iterations and select the checkpoint according to the FID results on 1000 samples generated under the reverse variance $\sigma_{n}^{2} = \beta_{n}$ and full timesteps.

# F.2 LOG-LIKELIHOOD AND SAMPLING

Following Ho et al. (2020), we linearly scale the image data consisting of integers in $\{0,1,\dots ,255\}$ to $[-1,1]$ , and discretize the last reverse Markov transition $p(\boldsymbol{x}_0|\boldsymbol{x}_1)$ to obtain discrete log-likelihoods for image data.

Following Ho et al. (2020), at the end of sampling, we only display the mean of $p(\boldsymbol{x}_0|\boldsymbol{x}_1)$ and discard the noise. This is equivalent to setting a clipping threshold of zero for the noise scale $\sigma_1$ . Inspired by this, when sampling, we also clip the noise scale $\sigma_2$ of $p(\boldsymbol{x}_1|\boldsymbol{x}_2)$ , such that $\mathbb{E}|\sigma_2\epsilon| \leq \frac{2}{255}y$ , where $\epsilon$ is the standard Gaussian noise and $y$ is the maximum tolerated perturbation of a channel. This improves the sample quality, especially for our analytic estimate (see Appendix G.4). We clip $\sigma_2$ for all methods compared in Table 2, and choose $y \in \{1,2\}$ according to the FID score. We find $y = 2$ works better on CIFAR10 (LS) and CelebA 64x64 with Analytic-DDPM. For other cases, we find $y = 1$ works better.

We use the official PyTorch implementation of FID (https://github.com/mseitzer/pytorch-fid). We calculate the FID score on 50K generated samples on all datasets.
Following Nichol & Dhariwal (2021), the reference distribution statistics are computed on the full training set for CIFAR10 and ImageNet 64x64. For CelebA 64x64 and LSUN Bedroom, the reference distribution statistics are computed on 50K training samples.

# F.3 CHOICE OF THE NUMBER OF MONTE CARLO SAMPLES AND CALCULATION OF $\Gamma$

We use as large an $M$ as possible without introducing too much computation. Specifically, we set $M = 50000$ on CIFAR10, $M = 10000$ on CelebA 64x64 and ImageNet 64x64 and $M = 1000$ on LSUN Bedroom by default without a sweep. All of the samples are from the training dataset. We use the default settings of $M$ for all results in Table 1, Table 2 and Table 3.

We only calculate $\Gamma$ in Eq. (8) once for a pretrained model, and $\Gamma$ is reused during inference under different settings (e.g., trajectories of smaller $K$ ) in Table 1, Table 2 and Table 3.

# F.4 IMPLEMENTATION OF THE EVEN TRAJECTORY

We follow Nichol & Dhariwal (2021) for the implementation of the even trajectory. Given the number of timesteps $K$ of a trajectory, we first determine the stride $a = \frac{N - 1}{K - 1}$ . Then the $k$ th timestep is determined as $\text{round}(1 + a(k - 1))$ .

# F.5 EXPERIMENTAL DETAILS OF TABLE 3

In Table 3, the results of DDPM, DDIM and Analytic-DPM are based on the same score-based models (i.e., those listed in Section F.1). We get the results of Improved DDPM by running its official code and unconditional $L_{\mathrm{hybrid}}$ models (https://github.com/openai/improved-diffusion). As shown in Table 4, on the same dataset, the sizes as well as the averaged time of a single function evaluation of these models are almost the same.

Table 4: Model size and the averaged time to run a model function evaluation with a batch size of 10 on one GeForce RTX 2080 Ti.
| | CIFAR10 | CelebA 64x64 | LSUN Bedroom |
| --- | --- | --- | --- |
| DDPM, DDIM, Analytic-DPM | 200.44 MB / 29 ms | 300.22 MB / 50 ms | 433.63 MB / 438 ms |
| Improved DDPM | 200.45 MB / 30 ms | Missing model | 433.64 MB / 439 ms |
+ +The DDPM and DDIM results on CIFAR10 are based on the quadratic trajectory following Song et al. (2020a), which gets better FID than the even trajectory. The Analytic-DPM result is based on the DDPM forward process on LSUN Bedroom, and based on the DDIM forward process on other datasets. These choices achieve better efficiency than their alternatives. + +# G ADDITIONAL RESULTS + +# G.1 VISUALIZATION OF REVERSE VARIANCES AND VARIATIONAL BOUND TERMS + +Figure 1 visualizes the reverse variances and $L_{\mathrm{vb}}$ terms on CIFAR10 with the linear forward noise schedule (LS). In Figure 2, we show more DDPM results on CIFAR10 with the cosine forward noise schedule (CS), CelebA 64x64 and ImageNet 64x64. + +![](images/dff9427a1d0a087630008486329be47f969a264f231e7b153f523bdea4ff64b2.jpg) +(a) CIFAR10 (CS) + +![](images/add05d35fa1ae7ca904bf001510097f6f45611d2657900887107008b0d711391.jpg) +(b) CelebA 64x64 + +![](images/8981c74d4425766d6b0c886e1bfd6db9c4cf911c4ca0fb66b5fd4674fb092c6e.jpg) +(c) ImageNet 64x64 + +![](images/e92cc3ee4ec082f8d0fc832661029b14e8ddf897624f83061792185fffe4dec6.jpg) +(d) CIFAR10 (CS) + +![](images/0dac181961b454b33095d74a6a59436924a37ef3808d0a377a2c24dae857257c.jpg) +(e) CelebA 64x64 + +![](images/29b21289e7487bb7b84f3e9b663e0bb4993647a61dafc43d5094ce4e9b4ca939.jpg) +(f) ImageNet 64x64 +Figure 2: Comparing our analytic estimate $\hat{\sigma}_n^2$ and prior works with handcrafted variances $\beta_{n}$ and $\tilde{\beta}_{n}$ . (a-c) compare the values of the variance of different timesteps. (d-e) compare the term in $L_{\mathrm{vb}}$ corresponding to each timestep. The value of $L_{\mathrm{vb}}$ is the area under the corresponding curve. + +# G.2 ABLATION STUDY ON THE NUMBER OF MONTE CARLO SAMPLES + +We show that only a small number of Monte Carlo (MC) samples $M$ in Eq. (8) is enough for a small MC variance. As shown in Figure 3, the values of $\Gamma_{n}$ with $M = 100$ and $M = 50000$ Monte Carlo samples are almost the same in a single trial. 
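The Monte Carlo estimate of $\Gamma_n$ in Eq. (8) can be sketched on a toy marginal where the exact score is known; here we take $q_n(\boldsymbol{x}_n) = \mathcal{N}(\boldsymbol{0}, v\boldsymbol{I})$ , whose score is $-\boldsymbol{x}/v$ and whose expected squared score norm per dimension is exactly $1/v$ (the Gaussian toy and the function names are our illustration, not the paper's setup):

```python
import numpy as np

def estimate_gamma(score_fn, sample_fn, M, d, rng):
    """Monte Carlo estimate Gamma = E ||s(x)||^2 / d over M samples from q_n."""
    x = sample_fn(M, d, rng)
    return np.mean(np.sum(score_fn(x) ** 2, axis=1) / d)

rng = np.random.default_rng(0)
v, d, M = 4.0, 32, 100_000
sample_fn = lambda M, d, rng: rng.normal(0.0, np.sqrt(v), size=(M, d))
score_fn = lambda x: -x / v  # exact score of N(0, v I)

gamma = estimate_gamma(score_fn, sample_fn, M, d, rng)
# The exact value is 1/v = 0.25; the estimate should be close.
assert abs(gamma - 1.0 / v) < 0.01
```

In the paper's setting the exact score is replaced by a pretrained score-based model $\boldsymbol{s}_n(\boldsymbol{x}_n)$ and the samples come from the training set.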
To explicitly see the variance, in Figure 4 and Figure 5, we plot the mean, the standard deviation and the relative standard deviation (RSD) (i.e., the ratio of the standard deviation to the mean) of a single Monte Carlo sample $\frac{||\boldsymbol{s}_n(\boldsymbol{x}_n)||^2}{d}$ , $\boldsymbol{x}_n \sim q_n(\boldsymbol{x}_n)$ and of $\Gamma_{n}$ with different $M$ respectively on CIFAR10 (LS). In all cases, the RSD decays fast as $n$ increases. When $n$ is small (e.g., $n = 1$ ), using $M = 10$ Monte Carlo samples ensures that the RSD of $\Gamma_{n}$ is below 0.1, and using $M = 100$ Monte Carlo samples ensures that the RSD of $\Gamma_{n}$ is about 0.025. When $n > 100$ , the RSD of a single Monte Carlo sample is below 0.05, so using only $M = 1$ Monte Carlo sample ensures that the RSD of $\Gamma_{n}$ is below 0.05. Overall, a small $M$ , such as 10 or 100, is sufficient for a small Monte Carlo variance.

Furthermore, we show that Analytic-DPM with a small $M$ , such as 10 or 100, performs similarly to Analytic-DPM with a large $M$ . As shown in Figure 6 (a), using $M = 100$ or $M = 50000$ barely affects the likelihood results on CIFAR10 (LS). In Table 5 (a), we show results with even smaller $M$ (e.g., $M = 1, 3, 10$ ). Under both the NLL and FID metrics, $M = 10$ achieves a similar result to that of $M = 50000$ . The results are similar on ImageNet 64x64, as shown in Figure 6 (b) and Table 5 (b). Notably, the expected FID performance is almost unaffected by the choice of $M$ .

As a result, Analytic-DPM consistently improves the baselines using a much smaller $M$ (e.g., $M = 10$ ), as shown in Table 6.

![](images/3e14871f661f5f7a3b8d01273c5fb18b6cff57b752ccc4ee8f613f08b1520467.jpg)
(a) CIFAR10 (LS)

![](images/3be4aa56d4d4863ad03ed39b885ef242242f0ed341e86c28255b17ca55cfee64.jpg)
(b) ImageNet 64x64
Figure 3: The value of $\Gamma_{n}$ in a single trial with different numbers of Monte Carlo samples $M$ .
+ +![](images/2b8a8e7bf9f9a4ca2cf857e9bebde2e42f7da2100ce3229b785d0e4d1edb5558.jpg) +(a) Mean + +![](images/b13ac695a7cd3f01582a18bedd3b10e582df27f9907d56ca1c82df9fb90edb02.jpg) +(b) Standard deviation + +![](images/e92695de175a74b680f324ea8884f1b9d907920057a607e8e5f28187922ea291.jpg) +(c) Relative standard deviation +Figure 4: The mean, the standard deviation and the relative standard deviation (RSD) (i.e., the ratio of the standard deviation to the mean) of a single Monte Carlo sample $\frac{||\pmb{s}_n(\pmb{x}_n)||^2}{d}$ , $\pmb{x}_n \sim q_n(\pmb{x}_n)$ at different $n$ on CIFAR10 (LS). These values are estimated by 50000 samples. + +![](images/31a7df81142c06edf7f5e22dc87628c2691f03056a87480138ea98273255fac2.jpg) +(a) Mean +Figure 5: The mean, the standard deviation and the relative standard deviation (RSD) (i.e., the ratio of the standard deviation to the mean) of $\Gamma_{n}$ with different number of Monte Carlo samples $M$ at different $n$ on CIFAR10 (LS). These values are directly calculated from the mean, the standard deviation and the RSD of $\frac{||s_n(\pmb{x}_n)||^2}{d}$ , $\pmb{x}_n\sim q_n(\pmb{x}_n)$ presented in Figure 4. + +![](images/60f0dc4af252ae89fdf2ad06ea6b602186b742bc83afec39ff1aef3d58558602.jpg) +(b) Standard deviation + +![](images/3908bf806c18c1064ef46c4c191ee3d8b24239fe62e6c7e7222dac8a2aa364ec.jpg) +(c) Relative standard deviation + +![](images/de99ee24a7a6638ad23757dbb9ccbebc88172e866747ea4724a0c3dc180d08b1.jpg) +(a) CIFAR10 (LS) + +![](images/a38a6b77871f4abe6dde98ce14d61102a1e4a652175456bf6c07207066eaf158.jpg) +(b) ImageNet 64x64 +Figure 6: The curves of NLL v.s. the number of timesteps $K$ in a trajectory with different number of Monte Carlo samples $M$ , evaluated under $\sigma_n^2 = \hat{\sigma}_n^2$ and the even trajectory. + +Table 5: The negative log-likelihood (NLL) and the FID results of Analytic-DPM with different number of Monte Carlo samples $M$ . The results with $M = 1,3,10,100$ are averaged by 5 runs. 
All results are evaluated under the DDPM forward process and the even trajectory. We use $K = 10$ for CIFAR10 (LS) and $K = 25$ for ImageNet 64x64. +(a) CIFAR10 (LS) + +
| | NLL ↓ | FID ↓ |
| --- | --- | --- |
| M = 1 | 6.220±1.126 | 34.05±4.97 |
| M = 3 | 5.689±0.424 | 34.29±2.88 |
| M = 10 | 5.469±0.005 | 33.69±2.10 |
| M = 100 | 5.468±0.004 | 34.63±0.68 |
| M = 50000 | 5.471 | 34.26 |
+ +(b) ImageNet 64x64 + +
| | NLL ↓ | FID ↓ |
| --- | --- | --- |
| M = 1 | 4.943±0.162 | 31.59±5.11 |
| M = 3 | 4.821±0.055 | 31.98±1.19 |
| M = 10 | 4.791±0.017 | 31.93±1.02 |
| M = 100 | 4.785±0.003 | 31.93±0.69 |
| M = 10000 | 4.783 | 32.56 |
+ +Table 6: The NLL and FID comparison between Analytic-DDPM with $M = 10$ Monte Carlo samples and DDPM. Results are evaluated under the even trajectory on CIFAR10 (LS). + +
| # timesteps K | 10 | 25 | 50 | 100 | 200 | 400 |
| --- | --- | --- | --- | --- | --- | --- |
| **NLL ↓** | | | | | | |
| Analytic-DDPM (M=10) | 5.47 | 4.80 | 4.38 | 4.07 | 3.85 | 3.71 |
| DDPM | 6.99 | 6.11 | 5.44 | 4.86 | 4.39 | 4.07 |
| **FID ↓** | | | | | | |
| Analytic-DDPM (M=10) | 33.69 | 11.99 | 7.24 | 5.39 | 4.19 | 3.58 |
| DDPM | 44.45 | 21.83 | 15.21 | 10.94 | 8.23 | 4.86 |
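Plugging a precomputed $\Gamma$ into the analytic estimate and clipping by the bounds of Theorem 2 can be sketched as follows (a minimal sketch of ours; `Gamma` is a random placeholder standing in for the Monte Carlo estimate of Eq. (8)):

```python
import numpy as np

def analytic_ddpm_variance(beta, Gamma):
    """Analytic reverse variance for the DDPM forward process, clipped to the
    [beta_tilde_n, beta_n / alpha_n] interval given by the bounds in Theorem 2.
    `beta` and `Gamma` are length-N arrays indexed by timestep n = 1..N."""
    alpha = 1.0 - beta
    beta_bar = 1.0 - np.cumprod(alpha)
    beta_tilde = np.empty_like(beta)
    beta_tilde[0] = 0.0
    beta_tilde[1:] = beta_bar[:-1] / beta_bar[1:] * beta[1:]
    sigma2 = beta / alpha * (1.0 - beta * Gamma)  # Corollary 2
    return np.clip(sigma2, beta_tilde, beta / alpha)

beta = np.linspace(1e-4, 0.02, 1000)
Gamma = np.random.default_rng(0).uniform(0.0, 60.0, size=1000)  # placeholder
sigma2 = analytic_ddpm_variance(beta, Gamma)
assert np.all(sigma2 <= beta / (1.0 - beta) + 1e-12)
```

The clipping step is the one whose effect is quantified in Figure 9 below; in practice `Gamma` comes from the Monte Carlo procedure of Appendix F.3.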
+ +# G.3 TIGHTNESS OF THE BOUNDS + +In Section 3.1 and Appendix C, we derive upper and lower bounds of the optimal reverse variance. In this section, we show these bounds are tight numerically in practice. In Figure 7, we plot the combined upper bound (i.e., the minimum of the upper bounds in Eq. (11) and Eq. (12)) and the lower bound on CIFAR10. As shown in Figure 7 (a,c), the two bounds almost overlap under the full-timesteps $(K = N)$ trajectory. When the trajectory has a smaller number of timesteps (e.g., $K = 100$ ), the two bounds also overlap when the timestep $\tau_{k}$ is large. These results empirically validate that our bounds are tight, especially when the timestep is large. + +![](images/16bc2d5deae1acb14ee662f4bc2af0d8fbd8afae366972358d2aa3aeb5598e56.jpg) +(a) LS, $K = N$ + +![](images/9d4751f5f67c033bc164df8960c8174a31510838b0df4eea10d5c8c0ce5b9646.jpg) +(b) LS, $K = 100$ +Figure 7: The combined upper bound (UB) and the lower bound (LB) under full-timesteps $(K = N)$ and 100-timesteps $(K = 100)$ trajectories on CIFAR10 (LS) and CIFAR10 (CS). + +![](images/9208a7ab55c9db799396e3f2eb00281403d3d93387fac9bb81eead7ed23e5bbb.jpg) +(c) CS, $K = N$ + +![](images/b16fc26f4eb7754efddb20b898091821f127063bfab07bdecd11c803b18f8f4e.jpg) +(d) CS, $K = 100$ + +In Figure 8, we also plot the two upper bounds in Eq. (11) and Eq. (12) individually. The upper bound in Eq. (11) is tighter when the timestep is small and the other one is tighter when the timestep is large. Thereby, both upper bounds contribute to the combined upper bound. + +![](images/cd4bfea7cc47e6dbace6bf4e9e82e47e080a1ab467a42b67e1762d7700c20e96.jpg) +(a) LS, $K = N$ +Figure 8: The upper bounds (UB) in Eq. (11) and Eq. (12) under full-timesteps $(K = N)$ and 100-timesteps $(K = 100)$ trajectories on CIFAR10 (LS) and CIFAR10 (CS). 
+ +![](images/9ee88bec9a72bbdbc17df95b4482b6a8424a470b935415402b136f4b7a4fcc51.jpg) +(b) LS, $K = 100$ + +![](images/e2e3f7309730cc4952d740ddd9986796826552f9c4c02f902a74d964a18ae7b1.jpg) +(c) CS, $K = N$ + +![](images/86267329290ecde3c83ef1dee488210628f90cedb5227499de58c837c2120ed8.jpg) +(d) CS, $K = 100$ + +To see how these bounds work in practice, in Figure 9, we plot the probability that $\hat{\sigma}_n^2$ is clipped by the bounds in Theorem 2 with different number of Monte Carlo samples $M$ on CIFAR10 (LS). For all $M$ , the curves of ratio v.s. $n$ are similar and the estimate is clipped more frequently when $n$ is large. This is as expected because when $n$ is large, the gap between the upper bound in Eq. (12) and the lower bound in Eq. (11) tends to zero. The results also agree with the plot of the bounds in Figure 7. Besides, the similarity of results between different $M$ implies that the clipping by bounds occurs mainly due to the error of the score-based model $s_n(x_n)$ , instead of the randomness in Monte Carlo methods. + +![](images/a7c9cfc6cd5179a785204534f4cad4eab86426d2cabc0cae52ad285c9f26a96e.jpg) +Figure 9: The probability that $\hat{\sigma}_n^2$ is clipped by the bounds in Theorem 2 with different number of Monte Carlo samples $M$ on CIFAR10 (LS). The probability is estimated by the ratio of $\hat{\sigma}_n^2$ being clipped in 100 independent trials. The results are evaluated with full timesteps $K = N$ . + +# G.4 ABLATION STUDY ON THE CLIPPING OF $\sigma_{2}$ DESIGNED FOR SAMPLING + +This section validates the argument in Appendix F.2 that properly clipping the noise scale $\sigma_{2}$ in $p(x_1|x_2)$ leads to a better sample quality. As shown in Figure 10 and Figure 11, it greatly improves the sample quality of our analytic estimate. The curves of clipping and no clipping overlap as $K$ increases, since $\sigma_{2}$ is below the threshold for a large $K$ . 
+ +Indeed, as shown in Table 7, the clipping threshold designed for sampling in Appendix F.2 is 1 to 3 orders of magnitude smaller than the combined upper bound in Theorem 2 (i.e., the minimum of the upper bounds in Eq. (11) and Eq. (12)) when $K$ is small. + +As shown in Figure 12, clipping $\sigma_{2}$ also slightly improves the sample quality of the handcrafted reverse variance $\sigma_{n}^{2} = \beta_{n}$ used in the original DDPM (Ho et al., 2020). As for the other two variances, i.e., $\sigma_{n}^{2} = \tilde{\beta}_{n}$ in the original DDPM and $\sigma_{n}^{2} = \lambda_{n}^{2} = 0$ in the original DDIM (Song et al., 2020a), their $\sigma_{2}$ generally don't exceed the threshold and thereby clipping doesn't affect the result. + +![](images/236cb46c9c9c730cab2fbbd7291b9cd3e6bd0665e6f80aff611a0403097df415.jpg) +(a) CIFAR10 (LS) + +![](images/900e8f0607b03ad4ff972a2f14c00001211c02c762d5dea46e4c7e946298201f.jpg) +(b) CIFAR10 (CS) + +![](images/f67069650b69e630cb16fc65442e5ab66e4f50e4b51de628ab5b970681c2763f.jpg) +(c) CelebA 64x64 +Figure 10: Ablation study on clipping $\sigma_{2}$ , evaluated under Analytic-DDPM. + +![](images/a42bbd46c924672bc9fafa700fd4e3570f579f2082e30b806714defdbf92fffa.jpg) +(d) ImageNet 64x64 + +![](images/ca2bad1527d6eea323f68c10e8a0994a486592733442491dc7382378f3139c9d.jpg) +(a) CIFAR10 (LS) + +![](images/93c5407483b70ca6ab509448739cc7a3595a3df9fd702e1165d57df77446e475.jpg) +(b) CIFAR10 (CS) + +![](images/f0db59b061a5bc5c58dac92eacee9bfdf9dd446b437f5b772c32fae45fddf733.jpg) +(c) CelebA 64x64 +Figure 11: Ablation study on clipping $\sigma_{2}$ , evaluated under Analytic-DDIM. 
+ +![](images/9b0d74eabf37e1f97bd234ea6b2fa2a7ece75243e531806e50e72899af316597.jpg) +(d) ImageNet 64x64 + +![](images/df30f2823645b9d81d96f454bd9f0330d86360d49d27cebcc62b855c19368b0a.jpg) +(a) CIFAR10 (LS) + +![](images/87bf662f351995608f312336c94e57a0d09cdfa034be340b3226f661ba371eff.jpg) +(b) CIFAR10 (CS) + +![](images/d1a960538e6b439681008fc6f23026a9b914a28543eeb5c862ef346f89687782.jpg) +(c) CelebA 64x64 + +![](images/3b33d7d578f14cab4779c09d62c5c582040680680412c29148fc15cc10d6e2f3.jpg) +(d) ImageNet 64x64 +Figure 12: Ablation study on clipping $\sigma_{2}$ , evaluated under DDPM with $\sigma_{n}^{2} = \beta_{n}$ . + +Table 7: Comparing the values of (i) the threshold in Appendix F.2 used to clip $\sigma_2^2$ designed for sampling, (ii) the combined upper bound in Theorem 2 when $n = 2$ , (iii) the lower bound in Theorem 2 when $n = 2$ and (iv) our analytic estimate $\hat{\sigma}_2^2$ . We show comparison results on different datasets and different forward processes when $K$ is small. + +
| Model \ # timesteps K | 10 | 25 | 50 | 100 |
| --- | --- | --- | --- | --- |
| **CIFAR10 (LS)** | | | | |
| DDPM: Threshold (y=2) | 3.87 × 10^-4 | 3.87 × 10^-4 | 3.87 × 10^-4 | 3.87 × 10^-4 |
| DDPM: Upper bound | 1.45 × 10^-1 | 2.24 × 10^-2 | 6.20 × 10^-3 | 2.10 × 10^-3 |
| DDPM: Lower bound | 9.99 × 10^-5 | 9.96 × 10^-5 | 9.84 × 10^-5 | 9.55 × 10^-5 |
| DDPM: $\sigma_2^2$ | 8.70 × 10^-3 | 2.99 × 10^-3 | 1.32 × 10^-3 | 6.54 × 10^-4 |
| DDIM: Threshold (y=1) | 9.66 × 10^-5 | 9.66 × 10^-5 | 9.66 × 10^-5 | 9.66 × 10^-5 |
| DDIM: Upper bound | 1.37 × 10^-1 | 1.96 × 10^-2 | 4.82 × 10^-3 | 1.36 × 10^-3 |
| DDIM: Lower bound | 0 | 0 | 0 | 0 |
| DDIM: $\sigma_2^2$ | 8.17 × 10^-3 | 2.54 × 10^-3 | 9.66 × 10^-4 | 3.73 × 10^-4 |
| **CIFAR10 (CS)** | | | | |
| DDPM: Threshold (y=1) | 9.66 × 10^-5 | 9.66 × 10^-5 | 9.66 × 10^-5 | 9.66 × 10^-5 |
| DDPM: Upper bound | 3.56 × 10^-2 | 6.15 × 10^-3 | 1.85 × 10^-3 | 6.80 × 10^-4 |
| DDPM: Lower bound | 4.12 × 10^-5 | 4.10 × 10^-5 | 4.04 × 10^-5 | 3.89 × 10^-5 |
| DDPM: $\sigma_2^2$ | 3.90 × 10^-3 | 1.28 × 10^-3 | 5.61 × 10^-4 | 2.75 × 10^-4 |
| DDIM: Threshold (y=1) | 9.66 × 10^-5 | 9.66 × 10^-5 | 9.66 × 10^-5 | 9.66 × 10^-5 |
| DDIM: Upper bound | 3.33 × 10^-2 | 5.22 × 10^-3 | 1.37 × 10^-3 | 4.18 × 10^-4 |
| DDIM: Lower bound | 0 | 0 | 0 | 0 |
| DDIM: $\sigma_2^2$ | 3.61 × 10^-3 | 1.06 × 10^-3 | 3.95 × 10^-4 | 1.53 × 10^-4 |
| **CelebA 64x64** | | | | |
| DDPM: Threshold (y=2) | 3.87 × 10^-4 | 3.87 × 10^-4 | 3.87 × 10^-4 | 3.87 × 10^-4 |
| DDPM: Upper bound | 1.45 × 10^-1 | 2.24 × 10^-2 | 6.20 × 10^-3 | 2.10 × 10^-3 |
| DDPM: Lower bound | 9.99 × 10^-5 | 9.96 × 10^-5 | 9.84 × 10^-5 | 9.55 × 10^-5 |
| DDPM: $\sigma_2^2$ | 4.04 × 10^-3 | 1.54 × 10^-3 | 7.54 × 10^-4 | 4.06 × 10^-4 |
| DDIM: Threshold (y=1) | 9.66 × 10^-5 | 9.66 × 10^-5 | 9.66 × 10^-5 | 9.66 × 10^-5 |
| DDIM: Upper bound | 1.37 × 10^-1 | 1.96 × 10^-2 | 4.82 × 10^-3 | 1.36 × 10^-3 |
| DDIM: Lower bound | 0 | 0 | 0 | 0 |
| DDIM: $\sigma_2^2$ | 3.74 × 10^-3 | 1.26 × 10^-3 | 5.17 × 10^-4 | 2.11 × 10^-4 |

| Model \ # timesteps K | 25 | 50 | 100 | 200 |
| --- | --- | --- | --- | --- |
| **ImageNet 64x64** | | | | |
| DDPM: Threshold (y=1) | 9.66 × 10^-5 | 9.66 × 10^-5 | 9.66 × 10^-5 | 9.66 × 10^-5 |
| DDPM: Upper bound | 5.93 × 10^-3 | 1.84 × 10^-3 | 6.44 × 10^-4 | 2.61 × 10^-4 |
| DDPM: Lower bound | 9.85 × 10^-6 | 9.81 × 10^-6 | 9.72 × 10^-6 | 9.51 × 10^-6 |
| DDPM: $\sigma_2^2$ | 1.40 × 10^-3 | 6.05 × 10^-4 | 2.77 × 10^-4 | 1.39 × 10^-4 |
| DDIM: Threshold (y=1) | 9.66 × 10^-5 | 9.66 × 10^-5 | 9.66 × 10^-5 | 9.66 × 10^-5 |
| DDIM: Upper bound | 5.46 × 10^-3 | 1.59 × 10^-3 | 5.03 × 10^-4 | 1.77 × 10^-4 |
| DDIM: Lower bound | 0 | 0 | 0 | 0 |
| DDIM: $\sigma_2^2$ | 1.28 × 10^-3 | 5.17 × 10^-4 | 2.12 × 10^-4 | 9.11 × 10^-5 |
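The threshold rows of Table 7 can be reproduced from the clipping rule of Appendix F.2: for standard Gaussian $\epsilon$ , $\mathbb{E}|\sigma_2\epsilon| = \sigma_2\sqrt{2/\pi}$ , so the constraint $\mathbb{E}|\sigma_2\epsilon| \leq \frac{2}{255}y$ is equivalent to $\sigma_2^2 \leq \frac{\pi}{2}\left(\frac{2y}{255}\right)^2$ (the function name is ours):

```python
import math

def sigma2_threshold(y):
    """Upper bound on sigma_2^2 implied by E|sigma_2 * eps| <= (2/255) * y,
    using E|eps| = sqrt(2/pi) for a standard Gaussian eps."""
    return (math.pi / 2) * (2 * y / 255) ** 2

# Matches the Threshold rows of Table 7.
assert abs(sigma2_threshold(1) - 9.66e-5) < 1e-7
assert abs(sigma2_threshold(2) - 3.87e-4) < 1e-6
```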
+

# G.5 SAMPLE QUALITY COMPARISON BETWEEN DIFFERENT TRAJECTORIES

While the optimal trajectory (OT) significantly improves the likelihood results, it doesn't lead to better FID results. As shown in Figure 13, the even trajectory (ET) achieves better FID results. Such behavior is essentially rooted in the different natures of the two metrics and has been investigated in extensive prior works (Ho et al., 2020; Nichol & Dhariwal, 2021; Song et al., 2021; Vahdat et al., 2021; Watson et al., 2021; Kingma et al., 2021).

![](images/6d6439491c8ff85f0353096813330e87bbd965e422452fe92f3990b4bc4eb48b.jpg)
(a) CIFAR10 (LS)

![](images/8927962c6deba1144290995161ee6cc7992dbbe9d6b1f778fa2b5470a8e5ee1c.jpg)
(b) CIFAR10 (CS)

![](images/8fae287a4f1f1e580310ee7b3e457b946ec3ebd983fb6fe835e0aa00d34c41d6.jpg)
(c) CelebA 64x64
Figure 13: FID results with ET and OT, evaluated under Analytic-DDPM.

![](images/aceb0388f810b11d31720c5885ba59819bcd99be3c4a6bc38e5df999ec56e012.jpg)
(d) ImageNet 64x64

# G.6 ADDITIONAL LIKELIHOOD COMPARISON

We compare our Analytic-DPM to Improved DDPM (Nichol & Dhariwal, 2021), which predicts the reverse variance by a neural network. The comparison is based on the ImageNet 64x64 model described in Appendix F.1. As shown in Table 8, with full timesteps, Analytic-DPM achieves an NLL of 3.61, which is very close to the 3.57 achieved by predicting the reverse variance in Improved DDPM. Besides, we also notice that the ET reduces the log-likelihood performance of Improved DDPM when $K$ is small, which is consistent with what Nichol & Dhariwal (2021) report. In contrast, our Analytic-DPM performs well with the ET.

Table 8: Negative log-likelihood (bits/dim) $\downarrow$ under the DDPM forward process on ImageNet 64x64. All results are evaluated under the even trajectory (ET).
| Model \ # timesteps K | 25 | 50 | 100 | 200 | 400 | 1000 | 4000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Improved DDPM | 18.91 | 8.46 | 5.27 | 4.24 | 3.86 | 3.68 | 3.57 |
| Analytic-DDPM | 4.78 | 4.42 | 4.15 | 3.95 | 3.81 | 3.69 | 3.61 |
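The even-trajectory construction of Appendix F.4 can be sketched in a few lines (the function name is ours):

```python
def even_trajectory(N, K):
    """Even trajectory of Appendix F.4: stride a = (N - 1) / (K - 1), with
    k-th timestep round(1 + a * (k - 1)); the trajectory always ends at N."""
    a = (N - 1) / (K - 1)
    return [round(1 + a * (k - 1)) for k in range(1, K + 1)]

traj = even_trajectory(1000, 10)
assert traj == [1, 112, 223, 334, 445, 556, 667, 778, 889, 1000]
```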
+ +# G.7 CELEBA 64x64 RESULTS WITH A SLIGHTLY DIFFERENT IMPLEMENTATION OF THE EVEN TRAJECTORY + +Song et al. (2020a) use a slightly different implementation of the even trajectory on CelebA 64x64. They choose a different stride $a = \operatorname{int}\left(\frac{N}{K}\right)$ , and the $k$ th timestep is determined as $1 + a(k - 1)$ . As shown in Table 9, under the setting of Song et al. (2020a) on CelebA 64x64, our Analytic-DPM still improves the original DDIM consistently and improves the original DDPM in most cases. + +# G.8 COMPARISON TO OTHER CLASSES OF GENERATIVE MODELS + +While DPMs and their variants serve as the most direct baselines to validate the effectiveness of our method, we also compare with other classes of generative models in Table 10. Analytic-DPM achieves competitive sample quality results among various generative models, and meanwhile significantly reduces the efficiency gap between DPMs and other models. + +Table 9: FID ↓ on CelebA 64x64, following the even trajectory implementation of Song et al. (2020a). †Original results in Song et al. (2020a). ‡Our reproduced results. + +
| Model \ # timesteps K | 10 | 20 | 50 | 100 | 1000 |
| --- | --- | --- | --- | --- | --- |
| **CelebA 64x64** | | | | | |
| DDPM, $\sigma_n^2 = \beta_n$ † | 33.12 | 26.03 | 18.48 | 13.93 | 5.98 |
| DDPM, $\sigma_n^2 = \beta_n$ ‡ | 33.13 | 25.95 | 18.61 | 13.92 | 5.95 |
| DDPM, $\sigma_n^2 = \tilde{\beta}_n$ † | 299.71 | 183.83 | 71.71 | 45.20 | 3.26 |
| DDPM, $\sigma_n^2 = \tilde{\beta}_n$ ‡ | 299.88 | 185.21 | 71.86 | 45.15 | 3.21 |
| Analytic-DDPM | 25.88 | 17.40 | 10.98 | 7.95 | 5.21 |
| DDIM, $\sigma_n^2 = \lambda_n^2 = 0$ † | 17.33 | 13.73 | 9.17 | 6.53 | 3.51 |
| DDIM, $\sigma_n^2 = \lambda_n^2 = 0$ ‡ | 17.38 | 13.72 | 9.17 | 6.51 | 3.40 |
| Analytic-DDIM | 12.74 | 9.50 | 5.96 | 4.14 | 3.13 |
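The variant of Song et al. (2020a) described in this subsection can be sketched analogously (the function name is ours); note that with $a = \operatorname{int}\left(\frac{N}{K}\right)$ the last timestep can fall short of $N$ :

```python
def even_trajectory_song(N, K):
    """Even-trajectory variant of Song et al. (2020a): stride a = int(N / K),
    k-th timestep 1 + a * (k - 1); the last timestep need not reach N."""
    a = N // K
    return [1 + a * (k - 1) for k in range(1, K + 1)]

traj = even_trajectory_song(1000, 10)
assert traj == [1, 101, 201, 301, 401, 501, 601, 701, 801, 901]
```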
+ +Table 10: Comparison to other classes of generative models on CIFAR10. We show the FID results, the number of model function evaluations (NFE) to generate a single sample and the time to generate 10 samples with a batch size of 10 on one GeForce RTX 2080 Ti. + +
| Method | FID ↓ | NFE ↓ | Time (s) ↓ |
| --- | --- | --- | --- |
| Analytic-DPM, K = 25 (ours) | 5.81 | 25 | 0.73 |
| DDPM, K = 90 (Ho et al., 2020) | 6.12 | 90 | 2.64 |
| DDIM, K = 30 (Song et al., 2020a) | 5.85 | 30 | 0.88 |
| Improved DDPM, K = 45 (Nichol & Dhariwal, 2021) | 5.96 | 45 | 1.37 |
| SNGAN (Miyato et al., 2018) | 21.7 | 1 | - |
| BigGAN (cond.) (Brock et al., 2018) | 14.73 | 1 | - |
| StyleGAN2 (Karras et al., 2020a) | 8.32 | 1 | - |
| StyleGAN2 + ADA (Karras et al., 2020a) | 2.92 | 1 | - |
| NVAE (Vahdat & Kautz, 2020) | 23.5 | 1 | - |
| Glow (Kingma & Dhariwal, 2018) | 48.9 | 1 | - |
| EBM (Du & Mordatch, 2019) | 38.2 | 60 | - |
| VAEBM (Xiao et al., 2020) | 12.2 | 16 | - |

# G.9 SAMPLES

In Figure 14-17, we show that Analytic-DDIM, constrained to a short trajectory of $K = 50$ timesteps, can generate samples comparable to those under the best FID setting.

In Figure 18-21, we also show samples of both Analytic-DDPM and Analytic-DDIM constrained to trajectories with different numbers of timesteps $K$ .

![](images/8670df0cd27e17d447846cfd3e21d3e7a32b825c802fdd15bbf1aa56c6b3f9b8.jpg)
(a) Best FID samples

![](images/2c885935b46ebf74d08e0636da576a8825be0a132bb88e1f26b6239e285ad512.jpg)
(b) Analytic-DDIM, $K = 50$

![](images/69968ca4d99dc1ac9afa31e14c4cb4f5f0c76fb91f055233f52e946868b38ad9.jpg)
(a) Best FID samples
Figure 15: Generated samples on CIFAR10 (CS).

![](images/a6cb8c69e4b7176150a0adc3ddd137396f18e13d92cca3b327cba9149e9b41b1.jpg)
Figure 14: Generated samples on CIFAR10 (LS).
(b) Analytic-DDIM, $K = 50$

![](images/b32a2263420501890dd8fc64898a4128dca79af6beea6172b34fd3f3808822ac.jpg)
(a) Best FID samples

![](images/fbd3e2613f5daf8efc9ab80cf2896cdc5c315d539a4be53c517f894657dab1df.jpg)
(b) Analytic-DDIM, $K = 50$

![](images/635b3ae22a6f508b5e0fb0f7f6776f9a6db0aa35b36a88852829d8ffe068f345.jpg)
(a) Best FID samples
Figure 17: Generated samples on ImageNet 64x64.

![](images/88afdf8bc724840339a3d8c01cb6706b1d6fc47cabcea59347dc4991e02c99d3.jpg)
Figure 16: Generated samples on CelebA 64x64.
+(b) Analytic-DDIM, $K = 50$ + +![](images/62aa329b12d5ab83c68e0c3bedc4880ee9b8133c0e5028183470c021df03d77a.jpg) +(a) Analytic-DDPM, $K = 10$ + +![](images/85df9f3e2341b777213154405aff73c0e8bab99a51c8b1fc66abbff8c8ed81ed.jpg) +(b) Analytic-DDPM, $K = 100$ + +![](images/5b2478396f623f2ac55197bf75b8226f3fe66fbd8ca206467bfe7e7161e7bafb.jpg) +(c) Analytic-DDPM, $K = 1000$ + +![](images/d0e0992986bf06cfd5e26a6be3667b15052681a220a2ea1805742c9b6b569263.jpg) +(d) Analytic-DDIM, $K = 10$ + +![](images/5480041dac226a575df875cf9c6723f1f1505975d6eaf8bad8924b343380d264.jpg) +(e) Analytic-DDIM, $K = 100$ + +![](images/acad7e53fc58be34a45092386631a9930027432dfabf7b6912efc54e096c4fbe.jpg) +(f) Analytic-DDIM, $K = 1000$ + +![](images/ae6620525b2cdbfbba577f4261b3e7b4bef7368ff29a9e31210d4fd9a11fcce3.jpg) +(a) Analytic-DDPM, $K = 10$ + +![](images/87983709f5c8a26723b6eb17115c23d6462bf7f535303d01df8382527a9be907.jpg) +Figure 18: Generated samples on CIFAR10 (LS). + +![](images/a8e45771514afd51c65d13cf1f18be5f3ac4772bbb81ec9e86c93f9472593067.jpg) +(c) Analytic-DDPM, $K = 1000$ + +![](images/2fff23188a5f3071962384d59dc58dbd206443abfee5b9d98c39168e32af9218.jpg) +(d) Analytic-DDIM, $K = 10$ +Figure 19: Generated samples on CIFAR10 (CS). 
+ +![](images/4e6877ed03d7f4e014ac8ce04e507c980ba777a7134e7c791cb5ad5eccec9706.jpg) +(b) Analytic-DDPM, $K = 100$ +(e) Analytic-DDIM, $K = 100$ + +![](images/3d9b6f5167fdfd951bd9309d95893d962de21fab40f7ecb9fb9ecac01670d301.jpg) +(f) Analytic-DDIM, $K = 1000$ + +![](images/a9b633d69a0fc4d9ded33a15148dd7b45ef3a6ee90c974b990364d9193b64dd4.jpg) +(a) Analytic-DDPM, $K = 10$ + +![](images/002434d51bacfc67497a1a75dc57f72d08a7761add18c44d4cf9f1799860273a.jpg) +(b) Analytic-DDPM, $K = 100$ + +![](images/700e386a7e732c07522c1d93195fd1932e98e42289cf127482d3947aca22d6f1.jpg) +(c) Analytic-DDPM, $K = 1000$ + +![](images/5ed1bc2fefe694e13b961ddaed5d15d8dabefab507b6296d37d3ef0062c20f7e.jpg) +(d) Analytic-DDIM, $K = 10$ + +![](images/ea08521d585541c9df65cb23af59b0950abb453edf719139a9bb3c1e4b8f51a6.jpg) +(e) Analytic-DDIM, $K = 100$ + +![](images/0c39e46228fbb6efb769a58c00794c76b4e65348e6ab8f747c186fd40d2976e7.jpg) +(f) Analytic-DDIM, $K = 1000$ + +![](images/4d2d956f956ce47ffd2440fc4cb7a061e636ee09f5ed67a0ae31412cae9310b0.jpg) +(a) Analytic-DDPM, $K = 25$ + +![](images/477b3b154644290cda7f5720d122162a37170d4751123b397c3070497646fa20.jpg) +Figure 20: Generated samples on CelebA 64x64. + +![](images/ae0ca47f3e9c67a6c1b360d30a2fb849df1e773a70d00cc0c5d05a336d6d06ae.jpg) +(c) Analytic-DDPM, $K = 4000$ + +![](images/ef2c617a22ea3391b82d516c9c6d8abde17d1fff0e1380471b280572a9f30915.jpg) +(d) Analytic-DDIM, $K = 25$ + +![](images/16ce90ac496b1932371abf6a5a894ec41329121dc9cc7879923ef2dc67bdc972.jpg) +(b) Analytic-DDPM, $K = 200$ +(e) Analytic-DDIM, $K = 200$ +Figure 21: Generated samples on ImageNet 64x64. + +![](images/657d9a1f8cb4dc0cf1904b0456add4242f71bb6abe722717dc716bb1499cd598.jpg) +(f) Analytic-DDIM, $K = 4000$ + +# H ADDITIONAL DISCUSSION + +# H.1 THE EXTRA COST OF THE MONTE CARLO ESTIMATE + +The extra cost of the Monte Carlo estimate $\Gamma$ is small compared to the whole inference cost. In fact, the Monte Carlo estimate requires $MN$ additional model function evaluations. 
During inference, suppose we generate $M_{1}$ samples or calculate the log-likelihood of $M_{1}$ samples with $K$ timesteps. Both DPMs and Analytic-DPMs need $M_{1}K$ model function evaluations. Employing the same score-based models, the relative additional cost of Analytic-DPM is $\frac{MN}{M_1 K}$. As shown in Appendix G.2, a very small $M$ (e.g., $M = 10, 100$) is sufficient for Analytic-DPM, making the relative additional cost small if not negligible. For instance, on CIFAR10, letting $M = 10$, $N = 1000$, $M_{1} = 50000$ and $K \geq 10$, we obtain $\frac{MN}{M_1 K} \leq 0.02$, and Analytic-DPM still consistently improves the baselines, as presented in Table 6.
+
+Further, the additional calculation of the Monte Carlo estimate occurs only once given a pretrained model and training dataset, since we can save the results of $\Gamma = (\Gamma_1, \dots, \Gamma_N)$ in Eq. (8) and reuse them among different inference settings (e.g., trajectories of various $K$). The reuse is valid because the marginal distribution of a shorter forward process $q(\pmb{x}_0, \pmb{x}_{\tau_1}, \dots, \pmb{x}_{\tau_K})$ at timestep $\tau_{k}$ is the same as that of the full-timestep forward process $q(\pmb{x}_{0:N})$ at timestep $n = \tau_k$. Indeed, in our experiments (e.g., Tables 1 and 2), $\Gamma$ is shared across different selections of $K$, trajectories and forward processes. Moreover, in practice, $\Gamma$ can be calculated offline and deployed together with the pretrained model, so the online inference cost of Analytic-DPM is exactly the same as that of DPM.
+
+# H.2 THE STOCHASTICITY OF THE VARIATIONAL BOUND AFTER PLUGGING THE ANALYTIC ESTIMATE
+
+In this part, we write $L_{\mathrm{vb}}$ as $L_{\mathrm{vb}}(\sigma_n^2)$ to emphasize its dependence on the reverse variance $\sigma_n^2$.
+
+When calculating the variational bound $L_{\mathrm{vb}}(\sigma_n^2)$ (i.e., the negative ELBO) of Analytic-DPM, we plug $\hat{\sigma}_n^2$ into the variational bound and get $L_{\mathrm{vb}}(\hat{\sigma}_n^2)$. Since $\hat{\sigma}_n^2$ is calculated by the Monte Carlo method, $L_{\mathrm{vb}}(\hat{\sigma}_n^2)$ is a random variable. A natural question is whether $L_{\mathrm{vb}}(\hat{\sigma}_n^2)$ is a stochastic bound of $L_{\mathrm{vb}}(\mathbb{E}[\hat{\sigma}_n^2])$, which could be settled by Jensen's inequality if $L_{\mathrm{vb}}$ were convex or concave. However, this is generally not guaranteed, as stated in Proposition 2.
+
+Proposition 2. $L_{\mathrm{vb}}(\sigma_n^2)$ is neither convex nor concave w.r.t. $\sigma_n^2$.
+
+Proof. Since $\sigma_n^2$ only influences the $n$-th term $L_{n}$ in the variational bound $L_{\mathrm{vb}}$, where
+
+$$
+L_{n} = \left\{ \begin{array}{ll} \mathbb{E}_{q} D_{\mathrm{KL}}(q(\boldsymbol{x}_{n-1}|\boldsymbol{x}_{n}, \boldsymbol{x}_{0}) \,||\, p(\boldsymbol{x}_{n-1}|\boldsymbol{x}_{n})) & 2 \leq n \leq N \\ -\mathbb{E}_{q} \log p(\boldsymbol{x}_{0}|\boldsymbol{x}_{1}) & n = 1 \end{array} \right.,
+$$
+
+we only need to study the convexity of $L_{n}$ w.r.t. $\sigma_n^2$.
+
+When $2 \leq n \leq N$,
+
+$$
+L_{n} = \frac{d}{2} \left( \frac{\lambda_{n}^{2}}{\sigma_{n}^{2}} - 1 + \log \frac{\sigma_{n}^{2}}{\lambda_{n}^{2}} + \frac{1}{\sigma_{n}^{2}} \, \mathbb{E}_{q} \frac{||\tilde{\boldsymbol{\mu}}(\boldsymbol{x}_{n}, \boldsymbol{x}_{0}) - \boldsymbol{\mu}_{n}(\boldsymbol{x}_{n})||^{2}}{d} \right).
+$$
+
+Let $A = \lambda_n^2 + \mathbb{E}_q \frac{||\tilde{\boldsymbol{\mu}}(\boldsymbol{x}_n, \boldsymbol{x}_0) - \boldsymbol{\mu}_n(\boldsymbol{x}_n)||^2}{d}$; then $L_n$, as a function of $\sigma_n^2$, is convex when $0 < \sigma_n^2 < 2A$ and concave when $2A < \sigma_n^2$.
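The curvature switch at $\sigma_n^2 = 2A$ can be checked numerically. A minimal sketch with an illustrative value $A = 1$, dropping the additive constants and the $d/2$ factor (which do not affect convexity):

```python
import math

A = 1.0  # illustrative value of A = lambda_n^2 + E_q ||mu_tilde - mu_n||^2 / d

def L_n(s):
    # The n-th bound term as a function of s = sigma_n^2,
    # up to additive constants and the d/2 factor: A/s + log(s).
    return A / s + math.log(s)

def second_diff(f, s, h=1e-4):
    # Central second difference, approximating f''(s).
    return (f(s + h) - 2.0 * f(s) + f(s - h)) / h**2

assert second_diff(L_n, 1.0) > 0  # sigma_n^2 < 2A: convex
assert second_diff(L_n, 4.0) < 0  # sigma_n^2 > 2A: concave
```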
Thereby, $L_{\mathrm{vb}}(\sigma_n^2)$ is neither convex nor concave w.r.t. $\sigma_n^2$.
+
+Nevertheless, in this paper, $L_{\mathrm{vb}}(\hat{\sigma}_n^2)$ is a stochastic upper bound of $L_{\mathrm{vb}}(\sigma_n^{*2})$ because $\sigma_n^{*2}$ is the optimal reverse variance. The bias of $L_{\mathrm{vb}}(\hat{\sigma}_n^2)$ w.r.t. $L_{\mathrm{vb}}(\sigma_n^{*2})$ is due to the Monte Carlo method as well as the error of the score-based model. The former can be reduced by increasing the number of Monte Carlo samples. The latter is irreducible if the pretrained model is fixed, which motivates us to clip the estimate, as discussed in Section 3.1.
+
+# H.3 COMPARISON TO OTHER GAUSSIAN MODELS AND THEIR RESULTS
+
+The reverse process of DPMs is a Markov process with Gaussian transitions. Thereby, it is interesting to compare it with other Gaussian models, e.g., expectation propagation (EP) with Gaussian processes (GPs) (Kim & Ghahramani, 2006).
+
+Both EP and Analytic-DPM use moment matching as a key step to find analytic solutions of $D_{\mathrm{KL}}(p_{\text{target}}||p_{\text{opt}})$ terms. However, to our knowledge, the relation between moment matching and DPMs has not been revealed in prior literature. Further, compared to EP, we emphasize that it is highly nontrivial to calculate the second moment of $p_{\text{target}}$ in DPMs because $p_{\text{target}}$ involves an unknown and potentially complicated data distribution.
+
+In EP with GP (Kim & Ghahramani, 2006), $p_{\text{target}}$ is the product of a single likelihood factor and all other approximate factors, for tractability. In fact, the form of the likelihood factor is chosen such that the first two moments of $p_{\text{target}}$ can be easily computed or approximated. For instance, the original EP (Minka, 2001) considers a Gaussian mixture likelihood (or a Bernoulli likelihood for classification), whose moments can be directly obtained from properties of the Gaussian distribution (or via integration by parts).
Besides, as the cost of this tractability, EP has no convergence guarantee in general.
+
+In contrast, $p_{\text{target}}$ in this paper is the conditional distribution $q(\pmb{x}_{n-1}|\pmb{x}_n)$ of the corresponding joint distribution $q(\pmb{x}_{0:N})$ defined by the forward process. Note that the moments of $q(\pmb{x}_{n-1}|\pmb{x}_n)$ are nontrivial to calculate because it involves an unknown and potentially complicated data distribution. Technically, in Lemma 13, we carefully use the law of total variance conditioned on $\pmb{x}_0$ and convert the second moment of $q(\pmb{x}_{n-1}|\pmb{x}_n)$ to that of $q(\pmb{x}_0|\pmb{x}_n)$, which, perhaps surprisingly, can be expressed in terms of the score function, as proven in Lemma 11.
+
+# H.4 FUTURE WORKS
+
+In our work, we mainly focus on image data. It would be interesting to apply Analytic-DPM to other data modalities, e.g., speech data (Chen et al., 2020). As presented in Appendix E, our method can be applied to continuous DPMs, e.g., variational diffusion models (Kingma et al., 2021) that learn the forward noise schedule. It is appealing to see how Analytic-DPM works on these continuous DPMs. Finally, it is also interesting to incorporate the optimal reverse variance in the training process of DPMs.
\ No newline at end of file diff --git a/analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/images.zip b/analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..2071da92c483649c7d1c255d0c0a30905e456c26 --- /dev/null +++ b/analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4fda9c558137b30f8d6c49a8f96d9b07a57628c64c86f030223e358f60e20fe5 +size 3756025 diff --git a/analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/layout.json b/analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..1f64cea986db9c5ebf3999892035627d1c2db017 --- /dev/null +++ b/analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:676a98c5f27874a217c1843ff3d2a4771e340218e19f2552cb863da08ee88c95 +size 1595727 diff --git a/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_content_list.json b/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..07e5fcf5eee6d74cac6128b196298237b5b60ed6 --- /dev/null +++ b/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8db4e96c12a39ee59ecda82c86d61ee6a8cc4fb8990e2488d035bc5d54f6f293 +size 149307 diff --git a/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_model.json 
b/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..771deddde0006b4091d2c2750ce7da0c4d027a23 --- /dev/null +++ b/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:673a66355f58b033f220126493795b1f91a7478ef3abc1179d09e61c903df6bc +size 178520 diff --git a/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_origin.pdf b/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6e09f694ff7d76b5c1218f4bdf79d2331382499e --- /dev/null +++ b/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f0ebaca06f596b87772bb128c63f4017194108be0c208bb949d41a0f4ecd6cd +size 590348 diff --git a/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/full.md b/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a035bcb57fb634070d54e38a2613a7dc980085be --- /dev/null +++ b/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/full.md @@ -0,0 +1,563 @@ +# A NEW PERSPECTIVE ON "HOW GRAPH NEURAL NETWORKS GO BEYOND WEISFEILER-LEHMAN?" + +Asiri Wijesinghe & Qing Wang + +School of Computing, Australian National University, Canberra, Australia + +{asiri.wijesinghe,qing.wang}@anu.edu.au + +# ABSTRACT + +We propose a new perspective on designing powerful Graph Neural Networks (GNNs). In a nutshell, this enables a general solution to inject structural properties of graphs into a message-passing aggregation scheme of GNNs. 
As a theoretical basis, we develop a new hierarchy of local isomorphism on neighborhood subgraphs. Then, we theoretically characterize how message-passing GNNs can be designed to be more expressive than the Weisfeiler-Lehman test. To elaborate this characterization, we propose a novel neural model, called GraphSNN, and prove that this model is strictly more expressive than the Weisfeiler-Lehman test in distinguishing graph structures. We empirically verify the strength of our model on different graph learning tasks. It is shown that our model consistently improves the state-of-the-art methods on the benchmark tasks without sacrificing computational simplicity and efficiency.
+
+# 1 INTRODUCTION
+
+Many Graph Neural Networks (GNNs) employ a message-passing aggregation scheme to learn low-dimensional vector space representations for nodes in a graph (Kipf & Welling, 2017; Veličković et al., 2017; Hamilton et al., 2017; Gilmer et al., 2017; Sato, 2020; Loukas, 2020; de Haan et al., 2020). Let $G = (V, E)$ be a graph. For each node $v \in V$, a message-passing aggregation scheme recursively aggregates the feature vectors of nodes in the neighborhood of $v$ and combines the aggregated information with the feature vector of $v$ itself to obtain a representation. Since there is no natural ordering on nodes, such message-passing aggregation schemes are usually required to be permutation-invariant (Maron et al., 2018; Keriven & Peyré, 2019; Garg et al., 2020).
+
+Despite advances of GNNs in various graph learning tasks such as node classification (Kipf & Welling, 2017; Xu et al., 2018), graph classification (Xu et al., 2019; Wu et al., 2019) and link prediction (Zhang & Chen, 2017), there is still a lack of theoretical understanding of how to design powerful and practically useful GNNs that can capture rich structural information of graphs.
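Since the WL test is the yardstick used throughout this discussion, a minimal sketch of 1-WL colour refinement may help. The example pair (two disjoint triangles versus a 6-cycle) is a standard case that 1-WL cannot distinguish; the code is illustrative, not from the paper:

```python
def wl_refine(adj, colors, rounds=3):
    # 1-WL colour refinement: repeatedly hash each node's colour together
    # with the multiset (sorted tuple) of its neighbours' colours.
    for _ in range(rounds):
        colors = {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
                  for v in adj}
    return colors

# Two disjoint triangles vs. a 6-cycle: every node always sees the same
# degree-2 pattern, so 1-WL produces identical colour histograms.
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
hexagon = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0]}
c1 = wl_refine(two_triangles, {v: 0 for v in two_triangles})
c2 = wl_refine(hexagon, {v: 0 for v in hexagon})
assert sorted(c1.values()) == sorted(c2.values())  # 1-WL cannot tell them apart
```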
Recent studies (Xu et al., 2019; Morris et al., 2019) have explored the connections between GNNs and the Weisfeiler-Lehman (WL) test (Weisfeiler & Leman, 1968). By representing a neighborhood as a multiset of feature vectors and treating the neighborhood aggregation as an aggregation function over multisets, Xu et al. (2019) showed that message-passing GNNs are at most as powerful as the WL test in distinguishing graph structures. However, many simple graph structures still cannot be distinguished by the WL test, e.g., $G_{1}$ and $G_{2}$ shown in Figure 1. A question is: how to design expressive yet simple GNNs that can go beyond the WL test with a theoretically provable guarantee? + +Recently, there have been three main directions of extending GNNs beyond WL: (1) building GNNs for higher-order WL (i.e. $k$ -WL with $k \geq 3$ ) or variants (Maron et al., 2019; Morris et al., 2020; 2019); (2) counting on pre-defined substructures as additional features (Bouritsas et al., 2020); (3) augmenting node identifiers or random features into GNNs (You et al., 2021; Vignac et al., 2020; Sato et al., 2021). Unlike these works, we aim to introduce a general solution upon which GNNs can be enhanced to capture structural properties of graphs. This solution enables GNNs to provably be more expressive than the Weisfeiler-Lehman test, but still computationally efficient. It overcomes the following limitations of existing works. Compared with higher-order WL methods in (1) which require high computational overhead and are impractical, our method goes beyond the WL test but is still computationally efficient. Compared with the methods on counting substructures in (2), our + +![](images/2a9abac58962af6cd93dd3c08f010d02d90106e1aff438c5db5a6de65ccb77b5.jpg) +Figure 1: An overview of our proposed framework for GNNs that can go beyond the WL test in distinguishing non-isomorphic graphs $G_{1}$ and $G_{2}$ . 
The overlap subgraphs of $G_{1}$ and $G_{2}$ are structurally different, which is captured by the structural coefficients defined in Eq. 4.
+
+method does not require handcrafting substructures. Compared with the methods of augmenting node identifiers or random features in (3), our method can flexibly quantify local structures (see examples in Figure 3) and also capture different classes of local structures w.r.t. different graph learning tasks.
+
+Our work is grounded in three observations: (i) Treating a neighborhood as a multiset of feature vectors ignores the rich structural information among vertices in the neighborhood, thereby limiting the representational capacity of the model. Thus, we represent a neighborhood as a neighborhood subgraph in which vertices are structurally related, and show that the WL test is only as powerful as distinguishing neighborhood subgraphs in terms of their subtree structures in the neighborhood. (ii) There exists a natural class of isomorphic graphs, which strictly lies in between neighborhood subgraph isomorphism and neighborhood subtree isomorphism. We call it overlap (subgraph) isomorphism. The notion of overlap subgraph enables us to characterize structural interactions of vertices and inject them into a message-passing aggregation scheme for GNNs. (iii) By designing a proper function for quantifying structural interactions of vertices and preserving the injectiveness of a message-passing aggregation scheme, more expressive GNNs can be developed. We propose a new GNN model that is strictly more expressive than the WL test to demonstrate an instance of this kind.
+
+**Contributions.** In summary, the main contributions of this work are as follows:
+
+- We introduce a new hierarchy of local isomorphism to characterise different classes of local structures in neighborhood subgraphs, and discuss its connections with the WL test and GNNs (Section 2 and Theorems 1-2).
+- We develop a simple yet powerful framework to inject structural properties into a message-passing aggregation scheme, and theoretically characterize how GNNs can be designed to be more expressive than the WL test (Section 3 and Theorem 3).
+- We propose a novel neural model for graph learning, called GraphSNN, and prove that GraphSNN is strictly more expressive than the WL test in distinguishing graph structures (Section 4 and Theorem 4).
+- We show that, due to the way of injecting structural properties into a structured-message-passing aggregation scheme, GraphSNN can overcome the oversmoothing issue (Chen et al., 2020a; Zhao & Akoglu, 2019; Li et al., 2018) (Section 5.4).
+
+We have conducted experiments on benchmark tasks (Hu et al., 2020). The experimental results show that our model is highly efficient and can significantly improve the state-of-the-art methods without sacrificing computational simplicity.
+
+**Related work.** The Weisfeiler-Lehman (WL) hierarchy is a well-established framework for graph isomorphism tests (Grohe, 2017). Introduced by Weisfeiler and Lehman (Weisfeiler & Leman, 1968), the Weisfeiler-Lehman algorithm (also called 1-WL or color refinement) is a computationally efficient heuristic for testing graph isomorphism (Babai & Kucera, 1979). It is known that k-WL is strictly more powerful than (k-1)-WL when $k \geq 3$ (Cai et al., 1992; Grohe, 2017).
+
+Message-passing GNNs are typically considered as a differentiable neural generalization of the Weisfeiler-Lehman algorithms on graphs. It has been reported (Xu et al., 2019) that some popular GNNs such as GCN (Kipf & Welling, 2017) and GraphSAGE (Hamilton et al., 2017) are at most as powerful as 1-WL in distinguishing graph structures. Xu et al. (2019) have shown that the Graph Isomorphism Network (GIN) can be as powerful as 1-WL.
At its core, GIN provides an injective aggregation scheme that is defined as a function over multisets of feature vectors, and thus GIN has the representational power to map any two different multisets of feature vectors to different representations in an embedding space.
+
+A considerable amount of effort has been devoted to improving the expressive power of GNNs beyond 1-WL. Generally, there are three directions: (1) Several works proposed higher-order variants of GNNs that are as powerful as k-WL with $k \geq 3$ (Azizian & Lelarge, 2020). For example, Morris et al. (2019) introduced k-order graph networks that are as expressive as a set-based variant of k-WL, Maron et al. (2019) proposed a reduced 2-order graph network that is as expressive as 3-WL, and Morris et al. (2020) proposed a local version of k-WL which considers only a subset of vertices in a neighborhood. However, these more expressive GNNs are impractical to use due to their inherent high computational costs and sophisticated design. (2) Some works attempted to incorporate inductive biases based on isomorphism counting on pre-defined topological features such as triangles, cliques, and rings (Bouritsas et al., 2020; Liu et al., 2020; Monti et al., 2018), similar to the traditional ideas of graph kernels (Yanardag & Vishwanathan, 2015). However, pre-defining topological features requires domain-specific expertise, which is often not readily available. (3) Most recently, several works explored the ideas of augmenting GNNs using node identifiers or random features. For example, Vignac et al. (2020) proposed a method that maintains a "local context" for each node based on manipulating node identifiers in a permutation equivariant way. You et al. (2021) developed ID-GNNs by taking into account the identity information of vertices. Chen et al. (2020b) and Murphy et al. (2019) assigned one-hot IDs to vertices based on the ideas of relational pooling. Sato et al.
(2021) added a random feature to each node to improve the representational capability of GNNs. + +Our work is fundamentally different from existing models by injecting properties of structural interactions among vertices based on a natural class of isomorphic graphs in the local neighborhood (i.e., overlap subgraph isomorphism) into a message-passing aggregation scheme of GNNs. + +# 2 A NEW HIERARCHY OF LOCAL ISOMORPHISM + +In this section, we characterize a hierarchy of graph isomorphism based on local neighborhood subgraphs and explore its connections to 1-WL. + +Let $G = (V, E)$ be a simple, undirected graph with a set $V$ of vertices and a set $E$ of edges. The set of neighbors of a vertex $v$ is denoted by $\mathcal{N}(v) = \{u \in V | (v, u) \in E\}$ . The neighborhood subgraph of a vertex $v$ , denoted by $S_v$ , is the subgraph induced in $G$ by $\tilde{\mathcal{N}}(v) = \mathcal{N}(v) \cup \{v\}$ , which contains all edges in $E$ that have both endpoints in $\tilde{\mathcal{N}}(v)$ . For two adjacent vertices $v$ and $u$ , i.e., $(v, u) \in E$ , the overlap subgraph $S_{vu}$ between $v$ and $u$ is defined as $S_{vu} = S_v \cap S_u$ . + +Let $S_{i}$ and $S_{j}$ be the neighborhood subgraphs of two vertices $i$ and $j$ that are not necessarily adjacent, and $h_v$ be the feature vector of a vertex $v \in V$ . In the following, we define three notions of isomorphism, which correspond to different classes of local structures in neighborhood subgraphs. + +Definition 1. $S_{i}$ and $S_{j}$ are subgraph-isomorphic, denoted as $S_{i} \simeq_{subgraph} S_{j}$ , if there exists a bijective mapping $g: \tilde{\mathcal{N}}(i) \to \tilde{\mathcal{N}}(j)$ such that $g(i) = j$ and for any two vertices $v_{1}, v_{2} \in \tilde{\mathcal{N}}(i)$ , $v_{1}$ and $v_{2}$ are adjacent in $S_{i}$ iff $g(v_{1})$ and $g(v_{2})$ are adjacent in $S_{j}$ , and $h_{v_{1}} = h_{g(v_{1})}$ and $h_{v_{2}} = h_{g(v_{2})}$ . + +Definition 2. 
$S_{i}$ and $S_{j}$ are overlap-isomorphic, denoted as $S_{i} \simeq_{\text{overlap}} S_{j}$, if there exists a bijective mapping $g: \tilde{\mathcal{N}}(i) \to \tilde{\mathcal{N}}(j)$ such that $g(i) = j$ and, for any $v' \in \mathcal{N}(i)$ with $g(v') = u'$, $S_{iv'}$ and $S_{ju'}$ are subgraph-isomorphic.
+
+Definition 3. $S_{i}$ and $S_{j}$ are subtree-isomorphic, denoted as $S_{i} \simeq_{\text{subtree}} S_{j}$, if there exists a bijective mapping $g: \tilde{\mathcal{N}}(i) \to \tilde{\mathcal{N}}(j)$ such that $g(i) = j$ and, for any $v' \in \tilde{\mathcal{N}}(i)$ with $g(v') = u'$, $h_{v'} = h_{u'}$.
+
+Theorem 1 states that there is a hierarchy among these notions of local isomorphism on neighborhood subgraphs, where subgraph-isomorphism is the strongest one, subtree-isomorphism is the weakest, and overlap-isomorphism lies in between. Figure 2 shows two groups of graphs: one is distinguishable w.r.t. subgraph-isomorphism but not overlap-isomorphism, while the other is distinguishable by overlap-isomorphism but not subtree-isomorphism.
+
+![](images/0df298d3a2e90274fa4cb513c992116d9b68e61c803adb319ff5236e0b08772c.jpg)
+Figure 2: (a) $S_{i}$ and $S_{j}$ are overlap-isomorphic (i.e., having the same overlap subgraph) but not subgraph-isomorphic; (b) Four neighborhood subgraphs $\{S_{v_i} | i = 1, 2, 3, 4\}$ are subtree-isomorphic (i.e., having the same subtree) but not overlap-isomorphic.
+
+Theorem 1. The following statements are true: (a) If $S_{i} \simeq_{\text{subgraph}} S_{j}$, then $S_{i} \simeq_{\text{overlap}} S_{j}$, but not vice versa; (b) If $S_{i} \simeq_{\text{overlap}} S_{j}$, then $S_{i} \simeq_{\text{subtree}} S_{j}$, but not vice versa.
+
+Let $\mathcal{S} = \{S_v | v \in V\}$ and let $\zeta : \mathcal{S} \to \mathbb{R}^d$ map each neighborhood subgraph in $\mathcal{S}$ into a node embedding in $\mathbb{R}^d$. The following theorem states that GNNs that are as powerful as 1-WL can distinguish two neighborhood subgraphs only w.r.t.
subtree-isomorphism at each layer.
+
+Theorem 2. Let $M$ be a GNN. $M$ is as powerful as 1-WL in distinguishing non-isomorphic graphs if $M$ has a sufficient number of layers and each layer can map any $S_{i}$ and $S_{j}$ in $\mathcal{S}$ into two different embeddings (i.e., $\zeta(S_{i}) \neq \zeta(S_{j})$) if and only if $S_{i} \not\simeq_{\text{subtree}} S_{j}$.
+
+The complete proofs of these theorems are provided in Appendix C.
+
+# 3 A GENERALISED MESSAGE-PASSING FRAMEWORK
+
+In this section, we present a generalised message-passing framework (GMP) which makes it possible to inject local structural information into an aggregation scheme, in light of overlap subgraphs. We theoretically characterize how GNNs can be designed to be more expressive than 1-WL in this framework.
+
+Let $\mathcal{S}^* = \{S_{vu} | (v,u) \in E\}$ be the set of overlap subgraphs in $G$. We define structural coefficients for each vertex $v$ and its neighbors, i.e., $\omega : \mathcal{S} \times \mathcal{S}^{*} \to \mathbb{R}$ such that $A_{vu} = \omega(S_v, S_{vu})$. A question arising is: what are the desirable properties of such a function $\omega$? Ideally, it should quantify how a vertex $v$ structurally interacts with its neighbor $u$ in the local neighborhood. Thus, given $S_{vu} = (V_{vu}, E_{vu})$ and $S_{vu'} = (V_{vu'}, E_{vu'})$, a carefully designed $\omega$ should exhibit the following properties:
+
+(1) Local closeness: $\omega(S_v, S_{vu}) > \omega(S_v, S_{vu'})$ if $S_{vu}$ and $S_{vu'}$ are complete graphs with $S_{vu} = K_i$, $S_{vu'} = K_j$, and $i > j$, where $K_i$ refers to a complete graph on $i$ vertices.
+(2) Local denseness: $\omega(S_v, S_{vu}) > \omega(S_v, S_{vu'})$ if $S_{vu}$ and $S_{vu'}$ have the same number of vertices but differ in the number of edges s.t. $|V_{vu}| = |V_{vu'}|$ and $|E_{vu}| > |E_{vu'}|$.
+(3) Isomorphic invariance: $\omega(S_v, S_{vu}) = \omega(S_v, S_{vu'})$ if $S_{vu}$ and $S_{vu'}$ are isomorphic.
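Eq. 4 in Section 4 gives one coefficient with these properties. The snippet below sketches that formula and numerically checks properties (1) and (2); `lam` stands for the paper's $\lambda > 0$, and the helper `complete` is an assumption of this example:

```python
def omega(n_vertices, n_edges, lam=1.0):
    # Structural coefficient in the style of Eq. 4: edge density of the
    # overlap subgraph, scaled by |V_vu|^lambda with lambda > 0.
    return n_edges / (n_vertices * (n_vertices - 1)) * n_vertices ** lam

def complete(i):
    # Vertex and edge counts of the complete graph K_i.
    return i, i * (i - 1) // 2

# (1) Local closeness: among complete overlap subgraphs, bigger scores higher.
assert omega(*complete(4)) > omega(*complete(3))
# (2) Local denseness: same number of vertices, more edges -> larger value.
assert omega(4, 5) > omega(4, 4)
# (3) Isomorphic invariance holds because omega depends only on |V_vu|, |E_vu|.
```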
Let $\{\{\cdot\}\}$ denote a multiset, $\tilde{A} = (\tilde{A}_{vu})_{v,u\in V}$ where $\tilde{A}_{vu}$ is a normalised value of $A_{vu}$ , and $X\in \mathbb{R}^{|V|\times f}$ be a matrix of input feature vectors where $x_{v}\in \mathbb{R}^{f}$ associates each $v\in V$ . We denote the feature vector of $v$ at the t-th layer by $h_v^{(t)}$ and $h_v^{(0)} = x_v$ . Then, the $(t + 1)$ -th layer of an aggregation scheme can be defined as: + +$$ +m _ {a} ^ {(t)} = \operatorname {A G G R E G A T E} ^ {N} \left(\left\{\left(\left(\tilde {A} _ {v u}, h _ {u} ^ {(t)}\right) \mid u \in \mathcal {N} (v) \right\} \right\}\right), \tag {1} +$$ + +$$ +m _ {v} ^ {(t)} = \operatorname {A G G R E G A T E} ^ {I} \left(\left\{\left\{\tilde {A} _ {v u} \mid u \in \mathcal {N} (v) \right\} \right\}\right) h _ {v} ^ {(t)}, \tag {2} +$$ + +$$ +h _ {v} ^ {(t + 1)} = \operatorname {C o m b i n e} \left(m _ {v} ^ {(t)}, m _ {a} ^ {(t)}\right). \tag {3} +$$ + +$\mathrm{AGGREGATE}^N (\cdot)$ and $\mathrm{AGGREGATE}^I (\cdot)$ are two possibly different parameterized functions. Here, $m_{a}^{(t)}$ is a message aggregated from the neighbors of $v$ and their structural coefficients, and $m_v^{(t)}$ is an + +![](images/2f9c9fe635caaa145e65fd8bf9e901582eae9d66a1cc55d6c5ffacd7a44be513.jpg) +Figure 3: (a) Local closeness: for overlap subgraphs that are complete graphs, their structural coefficients increase with the number of vertices; (b) Local denseness: for overlap subgraphs that have the same number of vertices, their structural coefficients increase with the number of edges. + +![](images/6401104429963e93b7105579cafe3a806144d0281c189951c53fb328c1380208.jpg) + +"adjusted" message from $v$ after performing an element-wise multiplication between $AGGREGATE^{I}(\cdot)$ and $h_{v}^{(t)}$ to account for structural effects from its neighbors. Then, $m_{v}^{(t)}$ and $m_{a}^{(t)}$ are combined by $COMBINE(\cdot)$ to obtain the feature vector $h_{v}^{(t + 1)}$ . 
+
+The following theorem states that a GNN can be more expressive than 1-WL if $\omega$ is powerful enough to distinguish structure beyond neighborhood subtrees and the neighborhood aggregation function $\Phi$ is injective under a sufficient number of layers. The proof is provided in Appendix C.
+
+Theorem 3. Let $M$ be a GNN whose aggregation scheme $\Phi$ is defined by Eq. 1-Eq. 3. $M$ is strictly more expressive than 1-WL in distinguishing non-isomorphic graphs if $M$ has a sufficient number of layers and also satisfies the following conditions:
+
+(1) $M$ can distinguish at least two neighborhood subgraphs $S_{i}$ and $S_{j}$ with $S_{i} \simeq_{\text{subtree}} S_{j}$, $S_{i} \not\simeq_{\text{subgraph}} S_{j}$ and $\{\{\tilde{A}_{iv'} | v' \in \mathcal{N}(i)\}\} \neq \{\{\tilde{A}_{ju'} | u' \in \mathcal{N}(j)\}\}$;
+(2) $\Phi\left(h_v^{(t)}, \{\{h_u^{(t)} | u \in \mathcal{N}(v)\}\}, \{\{(\tilde{A}_{vu}, h_u^{(t)}) | u \in \mathcal{N}(v)\}\}\right)$ is injective.
+
+# 4 GRAPHSNN
+
+Generally, there are many different ways of designing $\omega$ and $\Phi$ functions, leading to GNNs with different expressive powers. To elaborate this, we propose a novel GNN model, named GraphSNN, whose aggregation scheme is an instantiation of our generalised message-passing framework. We prove that the expressive power of GraphSNN goes beyond 1-WL.
+
+**Model design.** In the following, we provide a definition of $\omega$ that satisfies the properties of local closeness, local denseness, and isomorphic invariance. One key idea behind this definition is to make it capable of being generalized to support different graph learning tasks, controlled by $\lambda > 0$ (to be further discussed in Section 5.3):
+
+$$
+\omega\left(S_{v}, S_{vu}\right) = \frac{\left|E_{vu}\right|}{\left|V_{vu}\right| \cdot \left(\left|V_{vu}\right| - 1\right)} \left|V_{vu}\right|^{\lambda}.
\tag {4} +$$ + +This definition allows us to formulate a weighted adjacency matrix $A = (A_{vu})_{v,u\in V}$ for GraphSNN. To compare structural coefficients across different nodes, we normalize $A$ to $\tilde{A}$ by $\tilde{A}_{vu} = \frac{A_{vu}}{\sum_{u\in\mathcal{N}(v)}A_{vu}}$ . Alternatively, $A$ can be normalized using Softmax or other normalization techniques. For each vertex $v\in V$ , the feature vector at the $(t + 1)$ -th layer is generated by + +$$ +h _ {v} ^ {(t + 1)} = \mathrm {M L P} _ {\theta} \left(\gamma^ {(t)} \left(\sum_ {u \in \mathcal {N} (v)} \tilde {A} _ {v u} + 1\right) h _ {v} ^ {(t)} + \sum_ {u \in \mathcal {N} (v)} \left(\tilde {A} _ {v u} + 1\right) h _ {u} ^ {(t)}\right), \tag {5} +$$ + +where $\gamma^{(t)}$ is a learnable scalar parameter. Since $\mathcal{N}(v)$ refers to one-hop neighbors of $v$ , one can stack multiple layers to handle more than one-hop neighborhood. Note that, to ensure the injectivity in the feature aggregation in the presence of structural coefficients, we add 1 into the first and second terms in Eq. 5. This design is critical for guaranteeing the expressiveness of GraphSNN beyond 1-WL, as will be discussed in the proofs of the lemmas and Theorem 4 later. + +Expressiveness analysis. We first generalise the result of universal functions over multisetts (Xu et al., 2019) to universal functions over pairs of multisetts since Eq. 5 involves not only node features but + +
MethodCoraCiteSeerPubmedNELLogbn-arxiv
GCN81.5 ± 0.470.3 ± 0.579.0 ± 0.566.0 ± 1.771.74 ± 0.29
GraphSNNGCN83.1 ± 1.872.3 ± 1.579.8 ± 1.268.3 ± 1.672.20 ± 0.90
GAT83.0 ± 0.672.6 ± 0.678.5 ± 0.3--
GraphSNNGAT83.8 ± 1.273.5 ± 1.679.6 ± 1.4--
GIN77.6 ± 1.166.1 ± 1.577.0 ± 1.261.5 ± 2.3-
GraphSNNGIN79.2 ± 1.768.3 ± 1.578.8 ± 1.363.8 ± 2.7-
GraphSAGE79.2 ± 3.771.6 ± 1.977.4 ± 2.263.7 ± 5.271.49 ± 0.27
GraphSNNGraphSAGE80.5 ± 2.572.7 ± 3.279.0 ± 3.566.3 ± 5.671.80 ± 0.70
Table 1: Classification accuracy (%) averaged over 10 runs on node classification.

also structural coefficients. Assume that $\mathcal{H}$, $\mathcal{A}$ and $\mathcal{W}$ are countable sets, where $\mathcal{H}$ is a node feature space, $\mathcal{A}$ is a structural coefficient space, and $\mathcal{W} = \{A_{ij}h_i|A_{ij}\in \mathcal{A}, h_i\in \mathcal{H}\}$. Let $H$ and $W$ be two multisets containing elements from $\mathcal{H}$ and $\mathcal{W}$, respectively, with $|H| = |W|$. We can prove Lemma 1, Lemma 2 and Theorem 4 below, where the proof details are provided in Appendix C.

Lemma 1. There exists a function $f$ s.t. $\pi(H, W) = \sum_{h \in H, w \in W} f(h, w)$ is unique for any distinct pair of multisets $(H, W)$.

Then, the injectiveness of $\pi(H,W)$ can be extended to $\pi^{\prime}(h_v,H,W)$ as in the lemma below.

Lemma 2. There exists a function $f$ s.t. $\pi^{\prime}(h_v,H,W) = \gamma f(h_v,|H|h_v) + \sum_{h\in H,w\in W}f(h,w)$ is unique for any distinct $(h_v,H,W)$, where $h_v\in \mathcal{H}$, $|H|h_v\in \mathcal{W}$, and $\gamma$ can be an irrational number.

Since any function over $(h_v, H, W)$ can be decomposed as $g(\gamma f(h_v, |H| h_v) + \sum_{h \in H, w \in W} f(h, w))$, similar to Xu et al. (2019), we use a parameterized multi-layer perceptron (MLP) to learn $f$ and $g$. The following theorem characterizes the expressive power of GraphSNN.

Theorem 4. GraphSNN is more expressive than 1-WL in testing non-isomorphic graphs.

Since GIN is as powerful as 1-WL (Xu et al., 2019), this theorem implies that GraphSNN is more expressive than GIN, i.e., GraphSNN can map at least two different neighborhood subgraphs that correspond to the same multiset of feature vectors to different representations.

Complexity analysis. Similar to GCN and GIN, GraphSNN is computationally efficient. Its time and memory complexities are linear w.r.t. the number of edges in a graph.
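To make the preprocessing step concrete, the structural coefficients of Eq. 4 and their row normalization can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code: the adjacency-dict representation and the function names are our own, and the overlap subgraph $S_{vu}$ is taken to be the subgraph induced by the intersection of the closed neighborhoods of $v$ and $u$.

```python
from itertools import combinations

def structural_coeffs(adj, lam=1.0):
    """Compute omega(S_v, S_vu) of Eq. 4 for every edge (v, u).

    adj maps each vertex to the set of its neighbours (undirected).
    Since v and u are adjacent, both belong to V_vu, so |V_vu| >= 2.
    """
    omega = {}
    for v in adj:
        for u in adj[v]:
            # vertices of the overlap subgraph S_vu
            V_vu = (adj[v] | {v}) & (adj[u] | {u})
            # edges of the induced overlap subgraph
            E_vu = sum(1 for a, b in combinations(V_vu, 2) if b in adj[a])
            n = len(V_vu)
            omega[(v, u)] = E_vu / (n * (n - 1)) * n ** lam
    return omega

def normalise(omega, adj):
    """Row-normalise: A~_vu = A_vu / sum_{u' in N(v)} A_vu'."""
    return {(v, u): w / sum(omega[(v, x)] for x in adj[v])
            for (v, u), w in omega.items()}
```

On a triangle, every edge's overlap subgraph is the whole triangle, giving $\omega = \frac{3}{3\cdot 2}\cdot 3 = 1.5$ at $\lambda = 1$. Each inner loop only touches the two endpoint neighborhoods, consistent with the $O(ml)$ preprocessing cost stated above.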
Further, due to the locality of GraphSNN, the computation of aggregating feature vectors from neighborhood subgraphs at each layer can be parallelized across all vertices. Structural coefficients can be precomputed with time complexity $O(ml)$, where $m$ is the number of edges and $l$ is the maximum degree of vertices in a graph, and this computation can also be parallelized across all edges. Table 9 in Appendix A summarizes the time and space complexities of several popular message-passing GNNs in comparison with GraphSNN.

# 5 NUMERICAL EXPERIMENTS

In this section, we evaluate our models on node classification and graph classification benchmark tasks. All the results of our models are statistically significant at the 0.05 significance level.

# 5.1 NODE CLASSIFICATION

Datasets. We use five datasets: three citation network datasets, Cora, Citeseer, and Pubmed (Sen et al., 2008), for semi-supervised document classification; one knowledge graph dataset, NELL (Carlson et al., 2010), for semi-supervised entity classification; and one OGB dataset, ogbn-arxiv, from Hu et al. (2020). Table 10 in Appendix B contains statistics for these datasets.

Baseline methods. We consider the popular message-passing GNNs: GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2017), GIN (Xu et al., 2019), and GraphSAGE (Hamilton et al., 2017). For each of these baselines, we construct a $\mathrm{GraphSNN}_M$ model by replacing its aggregation scheme with ours, as detailed in Appendix A. The purpose of this setup is to evaluate

| Method | MUTAG | PTC-MR | PROTEINS | D&D | BZR | COX2 | IMDB-B | RDT-M5K |
|---|---|---|---|---|---|---|---|---|
| WL | 90.4 ± 5.7 | 59.9 ± 4.3 | 75.0 ± 3.1 | 79.4 ± 0.3 | 78.5 ± 0.6 | 81.7 ± 0.7 | 73.8 ± 3.9 | 52.5 ± 2.1 |
| RetGK | 90.3 ± 1.1 | 62.5 ± 1.6 | 75.8 ± 0.6 | 81.6 ± 0.3 | - | - | 71.9 ± 1.0 | - |
| GNTK | 90.0 ± 8.5 | 67.9 ± 6.9 | 75.6 ± 4.2 | 75.6 ± 3.9 | 83.6 ± 2.9 | - | 76.9 ± 3.6 | - |
| P-WL | 90.5 ± 1.3 | 64.0 ± 0.8 | 75.2 ± 0.3 | 78.6 ± 0.3 | - | - | - | - |
| WL-PM | 87.7 ± 0.8 | 61.4 ± 0.8 | - | 78.6 ± 0.2 | - | - | - | - |
| WWL | 87.2 ± 1.5 | 66.3 ± 1.2 | 74.2 ± 0.5 | 79.6 ± 0.5 | 84.4 ± 2.0 | 78.2 ± 0.4 | 74.3 ± 0.8 | - |
| FGW | 88.4 ± 5.6 | 65.3 ± 7.9 | 74.5 ± 2.7 | - | 85.1 ± 4.1 | 77.2 ± 4.8 | 63.8 ± 3.4 | - |
| DGCNN | 85.8 ± 1.7 | 58.6 ± 2.5 | 75.5 ± 0.9 | 79.3 ± 0.9 | - | - | 70.0 ± 0.9 | 48.7 ± 4.5 |
| CapsGNN | 86.6 ± 6.8 | 66.0 ± 1.8 | 76.2 ± 3.6 | 75.4 ± 4.1 | - | - | 73.1 ± 4.8 | 52.9 ± 1.5 |
| †GraphSAGE | 85.1 ± 7.6 | 63.9 ± 7.7 | 75.9 ± 3.2 | 72.9 ± 2.0 | - | - | 72.3 ± 5.3 | 50.0 ± 1.3 |
| †GIN | 89.4 ± 5.6 | 64.6 ± 7.0 | 75.9 ± 2.8 | - | - | - | 75.1 ± 5.1 | 57.5 ± 1.5 |
| †GraphSNN (S) | 91.57 ± 2.8 | 66.70 ± 3.7 | 76.83 ± 2.5 | 81.97 ± 2.6 | 88.69 ± 3.2 | 82.86 ± 3.1 | 77.86 ± 3.6 | 58.43 ± 2.3 |
| †GraphSNN (R) | 91.24 ± 2.5 | 66.96 ± 3.5 | 76.51 ± 2.5 | 82.46 ± 2.7 | 88.97 ± 2.9 | 83.13 ± 3.5 | 76.93 ± 3.3 | 58.51 ± 2.7 |
| GraphSNN (S) | 94.70 ± 1.9 | 70.58 ± 3.1 | 78.42 ± 2.7 | 83.92 ± 2.3 | 91.12 ± 3.0 | 86.28 ± 3.3 | 78.51 ± 2.8 | 59.86 ± 2.6 |
| GraphSNN (R) | 94.14 ± 1.2 | 71.01 ± 3.6 | 78.21 ± 2.9 | 84.61 ± 1.5 | 91.88 ± 3.2 | 86.72 ± 2.9 | 77.87 ± 3.1 | 60.23 ± 2.2 |
Table 2: Classification accuracy (%) averaged over 10 runs on graph classification. The results of WL and RetGK are taken from (Du et al., 2019), GraphSAGE from (Xu et al., 2019), DGCNN from (Maron et al., 2019), and others from their original papers. $\dagger$ indicates the reporting setting used in GIN; further details on the experimental settings are given in Appendix B.

how effectively our aggregation scheme with structural coefficients can learn representations for vertices, compared with the standard message-passing aggregation scheme.

Experimental setup. We use the Adam optimizer (Kingma & Ba, 2015) and $\lambda = 1$. For ogbn-arxiv, our models are trained for 500 epochs with learning rate 0.01, dropout 0.5, 256 hidden units, and $\gamma = 0.1$. For the other datasets, we use 200 epochs with learning rate 0.001, and choose the best values for weight decay from $\{0.001, 0.002, \dots, 0.009\}$ and hidden units from $\{64, 128, 256, 512\}$. For $\gamma$ and dropout at each layer, the best value for each model on each dataset is selected from $\{0.1, 0.2, \dots, 0.6\}$. GraphSNN$_{GAT}$ uses attention dropout 0.6 and 8 attention heads. GraphSNN$_{GraphSAGE}$ uses a neighborhood sample size of 25 with mean aggregation.

We consider two settings of data splits for all datasets except ogbn-arxiv: (1) the standard splits in Kipf & Welling (2017), i.e., 20 nodes from each class for training, 500 nodes for validation and 1000 nodes for testing, for which the results are presented in Table 1; (2) the random splits in Pei et al. (2020), i.e., randomly splitting nodes into $60\%$, $20\%$ and $20\%$ for training, validation and testing, respectively, for which the results are presented in Table 13 in Appendix B. For ogbn-arxiv, we follow Hu et al. (2020) and use a time-based split by publication date.
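For concreteness, the 60/20/20 random split of Pei et al. (2020) amounts to no more than the following (an illustrative sketch; the function name and the seed handling are ours):

```python
import random

def random_split(n_nodes, train=0.6, val=0.2, seed=0):
    """Randomly split node indices into train/val/test (60/20/20)."""
    idx = list(range(n_nodes))
    random.Random(seed).shuffle(idx)  # deterministic for a fixed seed
    a = int(train * n_nodes)
    b = int((train + val) * n_nodes)
    return idx[:a], idx[a:b], idx[b:]
```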
# 5.2 GRAPH CLASSIFICATION

We evaluate GraphSNN from three aspects: (1) small standard graph datasets, (2) large graph datasets, and (3) comparison with GNNs that go beyond 1-WL.

Experiments on small graphs. We use eight datasets from two categories: (1) bioinformatics datasets: MUTAG, PTC-MR, COX2, BZR, PROTEINS, and D&D (Debnath et al., 1991; Kriege et al., 2016; Wale et al., 2008; Shervashidze et al., 2011; Sutherland et al., 2003; Borgwardt & Kriegel, 2005); (2) social network datasets: IMDB-B and RDT-M5K (Yanardag & Vishwanathan, 2015). Table 11 in Appendix B contains statistics for these small graph datasets.

We compare against eleven baselines: (1) graph kernel based methods: the WL subtree kernel (Shervashidze et al., 2011), RetGK (Zhang et al., 2018b), GNTK (Du et al., 2019), P-WL (Rieck et al., 2019), WL-PM (Nikolentzos et al., 2017), WWL (Togninalli et al., 2019) and FGW (Titouan et al., 2019); (2) GNN based methods: DGCNN (Zhang et al., 2018a), CapsGNN (Xinyi & Chen, 2018), GIN (Xu et al., 2019), and GraphSAGE (Hamilton et al., 2017).

Both the standard stratified splits (Xu et al., 2019) and random splits are considered. We use 10-fold cross-validation with $90\%$ training and $10\%$ testing, and report the best mean accuracy. For both settings, we use the Adam optimizer (Kingma & Ba, 2015), batch size 64, hidden dimension 64, weight decay 0.009, a 2-layer MLP with batch normalization, 500 epochs, dropout 0.6, and $\gamma = 0.1$ over all datasets. The readout function of Xu et al. (2019) is used, which concatenates the representations of all layers to obtain the final graph representation. For the standard stratified splits, we use learning rate 0.009 over all datasets. For the random splits, we use learning rate 0.008 for MUTAG and RDT-M5K, and 0.007 for the other datasets. Table 2 presents the results.
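The concatenated readout referenced above can be sketched as follows. This is an illustrative sketch with plain Python lists rather than tensors, and it assumes sum pooling per layer (as in GIN); the function name is ours.

```python
def graph_readout(layer_reps):
    """Concatenate a sum-pooled graph representation from every layer.

    layer_reps: list over layers; each entry holds the per-node
    feature vectors (lists of floats) of one graph at that layer.
    """
    # sum-pool node features within each layer ...
    pooled = [[sum(dim) for dim in zip(*nodes)] for nodes in layer_reps]
    # ... then concatenate the pooled vectors across layers
    return [x for p in pooled for x in p]
```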
| Method | ogbg-molhiv | ogbg-moltox21 | ogbg-moltoxcast | ogbg-ppa | ogbg-molpcba |
|---|---|---|---|---|---|
| GIN | 75.58 ± 1.40 | 74.91 ± 0.51 | 63.41 ± 0.74 | 68.92 ± 1.00 | 22.66 ± 0.28 |
| GIN+VN | 75.20 ± 1.30 | 76.21 ± 0.82 | 66.18 ± 0.68 | 70.37 ± 1.07 | 27.03 ± 0.23 |
| GSN | 77.99 ± 1.00 | - | - | - | - |
| PNA | 79.05 ± 1.30 | - | - | - | 28.38 ± 0.35 |
| ID-GNN | 78.30 ± 2.00 | - | - | - | - |
| Deep LRP | 77.19 ± 1.40 | - | - | - | - |
| GraphSNN | 78.51 ± 1.70 | 75.45 ± 1.10 | 65.40 ± 0.71 | 70.66 ± 1.65 | 24.96 ± 1.50 |
| GraphSNN+VN | 79.72 ± 1.83 | 76.78 ± 1.27 | 67.68 ± 0.92 | 72.02 ± 1.48 | 28.50 ± 1.68 |
Table 3: Classification accuracy (%) averaged over 10 runs on graph classification, where $\lambda = 2$. The results of the baselines are taken from (Hu et al., 2020) and the leaderboard of the OGB website.
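The +VN variants in Table 3 perform message passing over graphs augmented with a single virtual node connected to every vertex (Hu et al., 2020; Ishiguro et al., 2019). A minimal sketch of that augmentation, with our own dict-based representation and names:

```python
def add_virtual_node(adj, feats, vn_feat=0.0):
    """Add one virtual node connected to all vertices of the graph.

    adj: dict vertex -> set of neighbours (integer vertex ids assumed);
    feats: dict vertex -> feature. Returns augmented copies.
    """
    vn = max(adj) + 1  # fresh vertex id for the virtual node
    new_adj = {v: set(nbrs) | {vn} for v, nbrs in adj.items()}
    new_adj[vn] = set(adj)  # virtual node sees every original vertex
    new_feats = dict(feats)
    new_feats[vn] = vn_feat  # e.g. a zero initial feature
    return new_adj, new_feats
```

The virtual node gives every pair of vertices a two-hop path, which is one common explanation for the gains of the +VN rows on the larger OGB datasets.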
| | Method | MUTAG | PTC-MR | PROTEINS | BZR | IMDB-B |
|---|---|---|---|---|---|---|
| GSN | GSN-e | 90.6 ± 7.5 | 68.2 ± 7.2 | 76.6 ± 5.0 | - | 77.8 ± 3.3 |
| | GSN-v | 92.2 ± 7.5 | 67.4 ± 5.7 | 74.5 ± 5.0 | - | 76.8 ± 2.0 |
| ID-GNNs | ID-GNN Fast | 96.5 ± 3.2 | 61.9 ± 5.4 | 78.0 ± 3.5 | 86.4 ± 3.0 | - |
| | ID-GNN Full | 93.0 ± 5.6 | 62.5 ± 5.3 | 77.9 ± 2.4 | 88.1 ± 4.0 | - |
| Ours | GraphSNN | 91.57 ± 2.8 | 66.70 ± 3.7 | 76.83 ± 2.5 | 88.69 ± 3.2 | 77.86 ± 3.6 |
| k-WL GNNs | 1-GNN$^{NT}$ | 82.7 ± 0.0 | 51.2 ± 0.0 | - | - | 69.4 ± 0.0 |
| | 1-GNN | 82.2 ± 0.0 | 59.0 ± 0.0 | - | - | 71.2 ± 0.0 |
| | 1-2-3-GNN$^{NT}$ | 84.4 ± 0.0 | 59.3 ± 0.0 | - | - | 70.3 ± 0.0 |
| | 1-2-3-GNN | 86.1 ± 0.0 | 60.9 ± 0.0 | - | - | 74.2 ± 0.0 |
| Ours | GraphSNN | 87.30 ± 3.1 | 61.63 ± 2.8 | 74.01 ± 3.2 | 82.72 ± 3.9 | 74.81 ± 3.5 |
Table 4: Classification accuracy (%) averaged over 10 runs on graph classification, where $\lambda = 2$. The results of the baselines are taken from their original papers. GSN and ID-GNNs use the same experimental setup as GIN, while k-WL GNNs use the same experimental setup as CapsGNN. These experimental setups are detailed in Appendix B.

Experiments on large graphs. We use five large graph datasets from Open Graph Benchmark (OGB) (Hu et al., 2020), including four molecular graph datasets (ogbg-molhiv, ogbg-moltox21, ogbg-moltoxcast and ogbg-molpcba) and one protein-protein association network (ogbg-ppa). Table 12 in Appendix B contains statistics for these large graph datasets.

We compare against the following methods that have reported results on the above OGB datasets: GIN and GIN+VN (Hu et al., 2020), GSN (Bouritsas et al., 2020), PNA (Corso et al., 2020), ID-GNNs (You et al., 2021) and Deep LRP (Chen et al., 2020b). In addition to the original GraphSNN model, we also consider a variant, denoted as GraphSNN+VN, which performs message passing over graphs augmented with virtual nodes (Hu et al., 2020; Ishiguro et al., 2019).

We follow the same experimental setup as in Hu et al. (2020). We use the Adam optimizer with learning rate 0.001, batch size 32, dropout 0.5 and 100 epochs for all datasets. GraphSNN uses an 8-layer MLP with embedding dimension 512 for ogbg-moltoxcast and ogbg-moltox21, while GraphSNN+VN uses embedding dimensions 300 and 256 with 8-layer and 5-layer MLPs for ogbg-moltoxcast and ogbg-moltox21, respectively. For ogbg-molhiv, ogbg-molpcba and ogbg-ppa, both GraphSNN and GraphSNN+VN use a 5-layer MLP and embedding dimension 200. Table 3 shows the results for the classification accuracy. Table 15 in Appendix B shows the running time of the preprocessing step.

Comparison with GNNs beyond 1-WL.
We compare GraphSNN with other GNNs that are more expressive than 1-WL, including GSN (Bouritsas et al., 2020), ID-GNNs (You et al., 2021) and k-WL GNNs (Morris et al., 2019). We use the same experimental setups as in (Xu et al., 2019; Bouritsas et al., 2020; Maron et al., 2019). Table 4 shows the results.

# 5.3 ABLATION STUDY

We perform an ablation study to analyze the effect of $\lambda$ values on model performance. Tables 5 and 6 show that $\lambda = 1$ yields the highest performance for node classification, while $\lambda = 2$ is the best for graph classification. This reflects a critical point: different kinds of structural information are needed by different graph learning tasks. $\lambda = 1$ captures local density, e.g., two overlap subgraphs may

| Dataset | Method | λ=1 | λ=2 | λ=3 | λ=4 | λ=5 |
|---|---|---|---|---|---|---|
| Cora | GraphSNN$_{GCN}$ | 83.1±1.8 | 82.8±1.3 | 82.3±2.4 | 81.8±1.6 | 82.1±1.6 |
| | GraphSNN$_{GIN}$ | 79.2±1.7 | 78.8±1.2 | 78.5±1.3 | 78.1±1.6 | 77.7±1.2 |
| | GraphSNN$_{GraphSAGE}$ | 80.5±2.5 | 80.3±2.1 | 79.8±1.9 | 79.2±1.9 | 79.4±2.2 |
| | GraphSNN$_{GAT}$ | 83.8±1.2 | 83.5±1.5 | 83.2±1.7 | 82.8±1.3 | 83.2±1.9 |
| Citeseer | GraphSNN$_{GCN}$ | 72.3±1.5 | 71.7±1.3 | 71.1±1.6 | 70.6±1.2 | 70.9±1.1 |
| | GraphSNN$_{GIN}$ | 68.3±1.5 | 68.3±1.9 | 67.7±1.4 | 67.1±1.3 | 67.3±1.4 |
| | GraphSNN$_{GraphSAGE}$ | 72.7±3.2 | 72.0±2.5 | 71.6±2.9 | 71.9±2.1 | 71.3±2.3 |
| | GraphSNN$_{GAT}$ | 73.5±1.6 | 72.9±1.7 | 72.5±1.1 | 72.6±1.6 | 72.0±1.3 |
Table 5: Classification accuracy (%) averaged over 10 runs on node classification with standard splits.
| Dataset | Method | λ=1 | λ=2 | λ=3 | λ=4 | λ=5 |
|---|---|---|---|---|---|---|
| MUTAG | GraphSNN | 92.66±2.4 | 94.14±1.2 | 93.38±1.5 | 92.25±2.1 | 92.79±2.0 |
| PTC-MR | GraphSNN | 70.76±5.1 | 71.01±3.6 | 70.67±2.8 | 69.59±2.1 | 69.97±3.1 |
| PROTEINS | GraphSNN | 77.90±4.9 | 78.21±2.9 | 78.15±2.1 | 77.20±3.1 | 76.93±3.2 |
| D&D | GraphSNN | 82.70±4.6 | 84.61±1.5 | 84.34±1.2 | 82.60±2.6 | 82.30±2.3 |
| BZR | GraphSNN | 87.61±4.9 | 91.88±3.2 | 91.45±2.6 | 91.38±2.1 | 90.90±3.1 |
| COX2 | GraphSNN | 86.20±3.3 | 86.72±2.9 | 83.81±3.1 | 83.13±2.6 | 83.94±3.2 |
| IMDB-B | GraphSNN | 77.07±5.2 | 77.87±3.1 | 77.60±3.6 | 77.32±3.2 | 77.10±3.3 |
| RDT-M5K | GraphSNN | 59.53±2.6 | 60.23±2.2 | 60.10±2.3 | 60.00±2.1 | 59.90±2.6 |
Table 6: Classification accuracy (%) averaged over 10 runs on graph classification with random splits.

considerably vary in the number of vertices while their local densities remain very close. Our experiments show that injecting such local density information helps improve the performance of node classification. $\lambda = 2$ captures local similarity, i.e., how similar two overlap subgraphs are. Two overlap subgraphs that considerably differ in the number of vertices would have very different structural coefficients. Since graph classification requires comparing the similarity of two graphs, $\lambda = 2$ works best.

# 5.4 OVERSMOOTHING ANALYSIS

We analyse the impact of model depth (number of layers) on node classification performance. In addition to GCN and GraphSNN$_{GCN}$, we also compare these models with a residual connection (i.e., GCN+residual and GraphSNN$_{GCN}$+residual). We evaluate all the models on the Cora dataset using the standard splits and the same hyperparameters as in Section 5.1. Table 7 shows the results. When increasing the model depth, GraphSNN$_{GCN}$ consistently performs better than GCN at each depth. This is because structural coefficients capture the structural connectivity between a target vertex and its neighbors. Thus, a neighbor with weak structural connectivity passes only a weak message to the target vertex, whereas a neighbor with strong structural connectivity passes a strong message. GraphSNN helps alleviate the oversmoothing issue even in the presence of residual connections. Further results of the oversmoothing analysis are provided in Appendix B.
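The role of $\lambda$ discussed above can be checked numerically: for two clique-shaped overlap subgraphs of different sizes, the density factor $|E_{vu}|/(|V_{vu}|(|V_{vu}|-1))$ of Eq. 4 is identical, and $\lambda$ controls how strongly the size term $|V_{vu}|^{\lambda}$ separates their coefficients (a small illustrative calculation; `omega_value` is our own helper, not part of the model code):

```python
def omega_value(num_edges, num_vertices, lam):
    """Eq. 4 as a function of |E_vu|, |V_vu| and lambda."""
    n = num_vertices
    return num_edges / (n * (n - 1)) * n ** lam

# A triangle (n=3, E=3) and a 5-clique (n=5, E=10) both have
# density factor E/(n(n-1)) = 0.5; only the size term differs.
ratio_lam1 = omega_value(10, 5, 1) / omega_value(3, 3, 1)  # 5/3
ratio_lam2 = omega_value(10, 5, 2) / omega_value(3, 3, 2)  # 25/9
# A larger lambda makes the coefficients more sensitive to the
# subgraph size, which is what the graph classification task rewards.
```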
| #Layers | GCN | GCN+residual | GraphSNN$_{GCN}$ | GraphSNN$_{GCN}$+residual |
|---|---|---|---|---|
| 1 | 79.6±0.5 | 80.3±0.7 | 80.1±0.8 | 81.6±1.6 |
| 2 | 81.5±0.4 | 82.8±1.2 | 83.1±1.8 | 84.1±1.7 |
| 3 | 80.3±0.6 | 82.3±0.5 | 82.0±0.8 | 83.4±0.7 |
| 4 | 78.2±0.9 | 81.5±0.9 | 80.1±0.7 | 82.9±0.9 |
| 5 | 74.3±1.3 | 81.0±1.3 | 79.1±1.2 | 82.3±0.3 |
| 6 | 35.6±1.5 | 80.6±0.5 | 76.5±1.3 | 81.5±1.2 |
| 7 | 31.6±0.9 | 79.7±0.6 | 76.3±1.3 | 80.9±0.9 |
| 8 | 16.2±1.2 | 78.4±1.1 | 75.7±1.2 | 80.3±1.3 |
Table 7: Classification accuracy (%) averaged over 10 runs on the Cora dataset.

# 6 CONCLUSIONS

In this paper, we have introduced a GNN framework, which enables a general way of injecting structural information into a message-passing aggregation scheme. We have also introduced a novel GNN model, GraphSNN, for graph learning, and proved that GraphSNN is more expressive than 1-WL in distinguishing graph structures. It is shown that GraphSNN consistently outperforms all the state-of-the-art approaches in both node classification and graph classification benchmark tasks.

# REFERENCES

Waiss Azizian and Marc Lelarge. Expressive power of invariant and equivariant graph neural networks. In International Conference on Learning Representations (ICLR), 2020.
László Babai and Ludik Kucera. Canonical labelling of graphs in linear average time. In 20th Annual Symposium on Foundations of Computer Science (SFCS), pp. 39-46, 1979.
Karsten M Borgwardt and Hans-Peter Kriegel. Shortest-path kernels on graphs. In Fifth IEEE International Conference on Data Mining (ICDM'05). IEEE, 2005.
Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. arXiv preprint arXiv:2006.09252, 2020.
Jin-Yi Cai, Martin Fürer, and Neil Immerman. An optimal lower bound on the number of variables for graph identification. Combinatorica, 12(4):389-410, 1992.
Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka, and Tom M Mitchell. Toward an architecture for never-ending language learning. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2010.
Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the oversmoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pp. 3438-3445, 2020a.
+Zhengdao Chen, Lei Chen, Soledad Villar, and Joan Bruna. Can graph neural networks count substructures? Advances in neural information processing systems (NeurIPS), 2020b. +Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Lio, and Petar Velicković. Principal neighbourhood aggregation for graph nets. Advances in Neural Information Processing Systems (NeurIPS), 2020. +Pim de Haan, Taco Cohen, and Max Welling. Natural graph networks. arXiv preprint arXiv:2007.08349, 2020. +Asim Kumar Debnath, Rosa L Lopez de Compadre, Gargi Debnath, Alan J Shusterman, and Corwin Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. correlation with molecular orbital energies and hydrophobicity. Journal of medicinal chemistry, 34 (2):786-797, 1991. +Simon S Du, Kangcheng Hou, Barnabás Póczos, Ruslan Salakhutdinov, Ruosong Wang, and Keyulu Xu. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. arXiv preprint arXiv:1905.13192, 2019. +Federico Errica, Marco Podda, Davide Bacciu, and Alessio Micheli. A fair comparison of graph neural networks for graph classification. 2020. +Vikas Garg, Stefanie Jegelka, and Tommi Jaakkola. Generalization and representational limits of graph neural networks. In International Conference on Machine Learning (ICML), pp. 3419-3430, 2020. +Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning (ICML), pp. 1263-1272. PMLR, 2017. +Martin Grohe. Descriptive complexity, canonisation, and definable graph structure theory, volume 47. Cambridge University Press, 2017. +Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1024-1034, 2017. +Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. 
Open graph benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems (NeurIPS), 2020. + +Katsuhiko Ishiguro, Shin-ichi Maeda, and Masanori Koyama. Graph warp module: an auxiliary module for boosting the power of graph neural networks in molecular graph analysis. arXiv preprint arXiv:1902.01020, 2019. +Nicolas Keriven and Gabriel Peyré. Universal invariant and equivariant graph neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 2019. +Diederick P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015. +Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017. +Nils M Kriege, Pierre-Louis Giscard, and Richard Wilson. On valid optimal assignment kernels and applications to graph classification. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1623-1631, 2016. +Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2018. +Xin Liu, Haojie Pan, Mutian He, Yangqiu Song, Xin Jiang, and Lifeng Shang. Neural subgraph isomorphism counting. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1959-1969, 2020. +Andreas Loukas. What graph neural networks cannot learn: depth vs width. In International Conference on Learning Representations (ICLR), 2020. +Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. In International Conference on Learning Representations (ICLR), 2018. +Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. Advances in Neural Information Processing Systems (NeurIPS), 2019. 
+Federico Monti, Karl Otness, and Michael M Bronstein. Motifnet: a motif-based graph convolutional network for directed graphs. In 2018 IEEE Data Science Workshop (DSW), pp. 225-228, 2018. +Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pp. 4602-4609, 2019. +Christopher Morris, Gaurav Rattan, and Petra Mutzel. Weisfeiler and leman go sparse: Towards scalable higher-order graph embeddings. Advances in Neural Information Processing Systems (NeurIPS), 2020. +Ryan Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Relational pooling for graph representations. In International Conference on Machine Learning (ICML), pp. 4663-4673, 2019. +Giannis Nikolentzos, Polykarpos Meladianos, and Michalis Vazirgiannis. Matching node embeddings for graph similarity. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2017. +Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, and Bo Yang. Geom-gcn: Geometric graph convolutional networks. In International Conference on Learning Representations (ICLR), 2020. +Bastian Rieck, Christian Bock, and Karsten Borgwardt. A persistent weisfeiler-lehman procedure for graph classification. In International Conference on Machine Learning (ICML), pp. 5448-5458. PMLR, 2019. +Ryoma Sato. A survey on the expressive power of graph neural networks. arXiv preprint arXiv:2003.04078, 2020. + +Ryoma Sato, Makoto Yamada, and Hisashi Kashima. Random features strengthen graph neural networks. In Proceedings of the 2021 SIAM International Conference on Data Mining (SDM), pp. 333-341, 2021. +Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93-93, 2008. 
+Nino Shervashidze, Pascal Schweitzer, Erik Jan Van Leeuwen, Kurt Mehlhorn, and Karsten M Borgwardt. Weisfeiler-lehman graph kernels. Journal of Machine Learning Research, 12(9), 2011. +Jeffrey J Sutherland, Lee A O'brien, and Donald F Weaver. Spline-fitting with a genetic algorithm: A method for developing classification structure- activity relationships. Journal of chemical information and computer sciences, 43(6):1906-1915, 2003. +Vayer Titouan, Nicolas Courty, Romain Tavenard, and Rémi Flamary. Optimal transport for structured data with application on graphs. In International Conference on Machine Learning (ICML), pp. 6275-6284, 2019. +Matteo Togninalli, Elisabetta Ghisu, Felipe Llinares-López, Bastian Rieck, and Karsten Borgwardt. Wasserstein weisfeiler-lehman graph kernels. In Advances in Neural Information Processing Systems (NeurIPS), pp. 6439-6449, 2019. +Petar Velicković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. International Conference on Learning Representations (ICLR), 2017. +Clément Vignac, Andreas Loukas, and Pascal Frossard. Building powerful and equivariant graph neural networks with structural message-passing. In Advances in Neural Information Processing Systems (NeurIPS), 2020. +Nikil Wale, Ian A Watson, and George Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. Knowledge and Information Systems, 14(3):347-375, 2008. +Boris Weisfeiler and Andrei Leman. The reduction of a graph to canonical form and the algebra which appears therein. NTI, Series, 2(9):12-16, 1968. +Asiri Wijesinghe and Qing Wang. Dfnets: Spectral cnns for graphs with feedback-looped filters. Advances in neural information processing systems (NeurIPS), 2019. +Jun Wu, Jingrui He, and Jiejun Xu. Demo-net: Degree-specific graph neural networks for node and graph classification. 
In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 406-415, 2019. +Zhang Xinyi and Lihui Chen. Capsule graph neural network. In International conference on learning representations (ICLR), 2018. +Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In International Conference on Machine Learning (ICML), pp. 5453-5462. PMLR, 2018. +Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations (ICLR), 2019. +Pinar Yanardag and SVN Vishwanathan. Deep graph kernels. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1365-1374, 2015. +Jiaxuan You, Jonathan Gomes-Selman, Rex Ying, and Jure Leskovec. Identity-aware graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2021. +Muhan Zhang and Yixin Chen. Weisfeiler-lehman neural machine for link prediction. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 575-583, 2017. + +Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2018a. +Zhen Zhang, Mianzhi Wang, Yijian Xiang, Yan Huang, and Arye Nehorai. Retgk: Graph kernels based on return probabilities of random walks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 3964-3974, 2018b. +Lingxiao Zhao and Leman Akoglu. Pairnorm: Tackling oversmoothing in gnns. In International Conference on Learning Representations (ICLR), 2019. + +# APPENDIX + +# A. 
CONNECTIONS TO PREVIOUS WORK + +In the following, we discuss how our framework generalizes the existing message-passing GNNs in the literature such as GCN (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2017), GAT (Veličković et al., 2017) and GIN (Xu et al., 2019) as special cases. Table 8 presents the local aggregation schemes used by these existing GNN models. They differ from each other w.r.t. the way of aggregating feature vectors in a neighborhood and how they are combined with the current vertex's feature itself, i.e., summation or concatenation. Here, $\alpha_{vu}$ is an attention coefficient capturing the importance of a neighbor in GAT, $\epsilon$ is a learnable or fixed scalar parameter used in GIN, $W$ is a learnable weight matrix and $\sigma$ is a non-linear activation function, such as ReLU. + +Note that, as defined in Equation 3, $m_{a}^{(t)}$ and $m_{v}^{(t)}$ refer to the messages aggregated by AGGREGATE $^{N}(\cdot)$ and AGGREGATE $^{I}(\cdot)$ , respectively. + +
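As a sanity check on this decomposition, the shared skeleton can be written once and each model recovered by plugging in its three functions. This is an illustrative scalar-feature sketch under our own naming; GIN is shown as an instance with $\epsilon = 0$ and the identity in place of $\mathrm{MLP}_\theta$.

```python
def mp_layer(h, adj, aggregate_n, aggregate_i, combine):
    """One generic message-passing layer:
    m_a = AGGREGATE^N over the neighbours' features,
    m_v = AGGREGATE^I on the vertex's own feature,
    new feature = COMBINE(m_v, m_a)."""
    return {v: combine(aggregate_i(h[v]),
                       aggregate_n([h[u] for u in adj[v]]))
            for v in adj}

def gin_layer(h, adj, eps=0.0):
    """GIN as an instance of the skeleton (identity in place of the MLP)."""
    return mp_layer(h, adj,
                    aggregate_n=sum,
                    aggregate_i=lambda x: (1 + eps) * x,
                    combine=lambda m_v, m_a: m_v + m_a)
```

GCN, GraphSAGE and GAT follow by substituting the corresponding rows of Table 8.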
| GNN Model | AGGREGATE$^N(\cdot)$ | AGGREGATE$^I(\cdot)$ | COMBINE$(\cdot)$ |
|---|---|---|---|
| GCN | $\sum_{u\in\mathcal{N}(v)} W^{(t)}h_u^{(t)}/\sqrt{\vert\mathcal{N}(u)\vert\vert\mathcal{N}(v)\vert}$ | $W^{(t)}h_v^{(t)}/\sqrt{\vert\mathcal{N}(v)\vert\vert\mathcal{N}(v)\vert}$ | $\sigma(\mathrm{SUM}(m_v^{(t)}, m_a^{(t)}))$ |
| GraphSAGE | $\sum_{u\in\mathcal{N}(v)} h_u^{(t)}/\vert\mathcal{N}(v)\vert$ | $h_v^{(t)}$ | $\sigma(W^{(t)}\cdot\mathrm{CONCAT}(m_v^{(t)}, m_a^{(t)}))$ |
| GAT | $\sum_{u\in\mathcal{N}(v)} \alpha_{vu} W^{(t)}h_u^{(t)}$ | $\alpha_{vv} W^{(t)}h_v^{(t)}$ | $\sigma(\mathrm{SUM}(m_v^{(t)}, m_a^{(t)}))$ |
| GIN | $\sum_{u\in\mathcal{N}(v)} h_u^{(t)}$ | $(1+\epsilon)h_v^{(t)}$ | $\mathrm{MLP}_\theta(\mathrm{SUM}(m_v^{(t)}, m_a^{(t)}))$ |
Table 8: Comparison of the aggregation schemes used in existing message-passing GNNs.

# COMPLEXITY ANALYSIS

Table 9 summarizes the time and space complexities of several popular message-passing GNNs and GraphSNN, where $n$ and $m$ are the numbers of vertices and edges in a graph, respectively, $k$ refers to the number of layers, $f$ and $d$ are the dimensions of input and output feature vectors, respectively, $a$ is the number of attention heads used in GAT, and $s$ is the number of neighbors sampled for each node at each layer in GraphSAGE.

| GNN Model | Time Complexity | Memory Complexity |
|---|---|---|
| GCN (Kipf & Welling, 2017) | $O(kmfd)$ | $O(m)$ |
| GIN (Xu et al., 2019) | $O(kmfd)$ | $O(m)$ |
| GAT (Veličković et al., 2017) | $O(k(anfd + amd))$ | $O(n^2)$ |
| GraphSAGE (Hamilton et al., 2017) | $O(snfd)$ | $O(n)$ |
| GraphSNN (ours) | $O(kmfd)$ | $O(m)$ |
Table 9: Time and space complexities of message-passing GNNs and GraphSNN.

# FORMULATION OF GRAPHSNN$_M$

For each of these message-passing GNNs, denoted as $M$, we construct a variant GraphSNN$_M$ by replacing its existing aggregation scheme with our aggregation scheme with structural coefficients, as formulated in Eq. 5. These variants are used in our experiments for the node classification benchmark tasks (see Section 5.1) in order to evaluate how our aggregation scheme with structural coefficients improves performance compared with the standard message-passing aggregation schemes. Below are the details of these variants.

# GCN and GraphSNN ${}_{GCN}$

Graph Convolutional Network (GCN) (Kipf & Welling, 2017) applies a normalized mean aggregation to combine the feature vector of a node $v$ with the feature vectors in its neighborhood $\mathcal{N}(v)$:

$$
h_{v}^{(t+1)} = \sigma\left(\frac{W^{(t)} h_{v}^{(t)}}{\sqrt{\left|\mathcal{N}(v)\right|\left|\mathcal{N}(v)\right|}} + \sum_{u \in \mathcal{N}(v)} \frac{W^{(t)} h_{u}^{(t)}}{\sqrt{\left|\mathcal{N}(v)\right|\left|\mathcal{N}(u)\right|}}\right). \tag{6}
$$

$\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}$ is a normalization constant for the edge $(v,u)$, which originates from the normalized adjacency matrix $D^{-1/2}AD^{-1/2}$. $W^{(t)}$ is a trainable weight matrix and $\sigma$ is a non-linear activation function such as ReLU. We generalise GCN to a model under the GMP framework, namely $\mathrm{GraphSNN}_{GCN}$, to improve the expressive power of GCN. We first construct a normalized structural coefficient matrix $\tilde{A}$.
Formally, each neural layer of $\mathrm{GraphSNN}_{GCN}$ may then be expressed as:

$$
h_{v}^{(t+1)} = \sigma\left(\gamma^{(t)}\left(\sum_{u \in \mathcal{N}(v)} \tilde{A}_{vu} + 1\right) \frac{W^{(t)} h_{v}^{(t)}}{\sqrt{|\tilde{\mathcal{N}}(v)||\tilde{\mathcal{N}}(v)|}} + \sum_{u \in \mathcal{N}(v)}\left(\tilde{A}_{vu} + 1\right) \frac{W^{(t)} h_{u}^{(t)}}{\sqrt{|\tilde{\mathcal{N}}(u)||\tilde{\mathcal{N}}(v)|}}\right). \tag{7}
$$

# GraphSAGE and GraphSNN ${}_{GraphSAGE}$

GraphSAGE (Hamilton et al., 2017) learns aggregation functions to induce new node feature vectors by sampling and aggregating features from a node's local neighborhood. GraphSAGE considers three different aggregation functions: the mean, LSTM and pooling aggregators. In our work, we mainly focus on the mean aggregator, which, for each vertex $v$, takes the mean of the feature vectors of the nodes in its neighborhood and concatenates it with the feature vector of $v$ as shown below:

$$
h_{v}^{(t+1)} = \sigma\left(W^{(t)} \cdot \mathrm{CONCAT}\left(\frac{1}{|\mathcal{N}(v)|} \sum_{u \in \mathcal{N}(v)} h_{u}^{(t)}, h_{v}^{(t)}\right)\right), \tag{8}
$$

where $W^{(t)}$ is a learnable weight matrix, and $\sigma$ represents a non-linear activation function. We also generalise GraphSAGE to a model under the GMP framework, namely $\mathrm{GraphSNN}_{GraphSAGE}$.
This model first takes a mean aggregation of the feature vectors in the neighborhood $\mathcal{N}(v)$ and then concatenates it with the feature vector of $v$ itself in the following manner:

$$
h_{v}^{(t+1)} = \sigma\left(W^{(t)} \cdot \operatorname{Concat}\left(\frac{1}{|\mathcal{N}(v)|} \sum_{u \in \mathcal{N}(v)} \left(\tilde{A}_{vu} + 1\right) h_{u}^{(t)}, \gamma^{(t)}\left(\sum_{u \in \mathcal{N}(v)} \tilde{A}_{vu} + 1\right) h_{v}^{(t)}\right)\right). \tag{9}
$$

# GAT and GraphSNN$_{GAT}$

Graph Attention Network (GAT) (Veličković et al., 2017) linearly transforms the input feature vectors and performs a weighted sum of the feature vectors of the vertices in a neighborhood after the transformation. GAT computes attention weights $\alpha_{vu}^{(t)}$ using an attention mechanism and aggregates the feature vectors in a neighborhood as follows:

$$
h_{v}^{(t+1)} = \sigma\Big(\sum_{(v,u) \in E} \alpha_{vu}^{(t)} W^{(t)} h_{u}^{(t)}\Big), \tag{10}
$$

where $W^{(t)}$ is a trainable weight matrix and $\sigma$ represents a non-linear activation function. We generalise GAT to a model, called $\mathrm{GraphSNN}_{GAT}$, in the GMP framework.
Firstly, we aggregate the feature vectors based on structural coefficients in our aggregation scheme, i.e., we compute

$$
\tilde{h}_{u}^{(t)} = \gamma^{(t)}\left(\sum_{z \in \mathcal{N}(u)} \tilde{A}_{uz} + 1\right) \frac{h_{u}^{(t)}}{\sqrt{|\tilde{\mathcal{N}}(u)||\tilde{\mathcal{N}}(u)|}} + \sum_{z \in \mathcal{N}(u)} \left(\tilde{A}_{uz} + 1\right) \frac{h_{z}^{(t)}}{\sqrt{|\tilde{\mathcal{N}}(z)||\tilde{\mathcal{N}}(u)|}} \tag{11}
$$

and

$$
\tilde{h}_{v}^{(t)} = \gamma^{(t)}\left(\sum_{z' \in \mathcal{N}(v)} \tilde{A}_{vz'} + 1\right) \frac{h_{v}^{(t)}}{\sqrt{|\tilde{\mathcal{N}}(v)||\tilde{\mathcal{N}}(v)|}} + \sum_{z' \in \mathcal{N}(v)} \left(\tilde{A}_{vz'} + 1\right) \frac{h_{z'}^{(t)}}{\sqrt{|\tilde{\mathcal{N}}(z')||\tilde{\mathcal{N}}(v)|}}. \tag{12}
$$

We then construct attention coefficients $\alpha_{vu}^{(t)}$ on these aggregated feature vectors as follows:

$$
\alpha_{vu}^{(t)} = \frac{\exp\left(\mathrm{LeakyReLU}\left(a^{T}\left[W^{(t)} \tilde{h}_{v}^{(t)} \,\|\, W^{(t)} \tilde{h}_{u}^{(t)}\right]\right)\right)}{\sum_{z \in \mathcal{N}(v)} \exp\left(\mathrm{LeakyReLU}\left(a^{T}\left[W^{(t)} \tilde{h}_{v}^{(t)} \,\|\, W^{(t)} \tilde{h}_{z}^{(t)}\right]\right)\right)}, \tag{13}
$$

where $\|$ represents concatenation, $W^{(t)}$ is a learnable weight matrix and $a$ is a learnable weight vector. After that, we aggregate the neighborhood features using the attention coefficients:

$$
h_{v}^{(t+1)} = \sigma\Big(\sum_{(v,u) \in E} \alpha_{vu}^{(t)} W^{(t)} \tilde{h}_{u}^{(t)}\Big), \tag{14}
$$

where $W^{(t)}$ is a learnable weight matrix, and $\sigma$ represents a non-linear activation function.
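As an illustration, the structurally-weighted aggregation of Eqs. 11 and 12 can be sketched in plain Python for scalar features. The toy graph, the structural coefficients $\tilde{A}$, the feature values, identity weights, and $\gamma = 1$ are all assumptions of this sketch, not the paper's implementation:

```python
import math

# Toy undirected graph as adjacency lists (hypothetical example data).
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
# Structural coefficients A~_{vu} per edge (symmetric, made-up values).
A = {(0, 1): 0.5, (0, 2): 0.5, (1, 2): 1.0, (2, 3): 0.0}
A.update({(u, v): c for (v, u), c in list(A.items())})
h = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}  # scalar node features

def n_tilde(v):
    # |N~(v)| = size of the closed neighborhood N(v) ∪ {v}.
    return len(neighbors[v]) + 1

def aggregate(v, gamma=1.0):
    # Self term, weighted by gamma * (sum of structural coefficients + 1).
    self_w = gamma * (sum(A[(v, u)] for u in neighbors[v]) + 1)
    out = self_w * h[v] / math.sqrt(n_tilde(v) * n_tilde(v))
    # Neighbor terms, each weighted by (A~_{vu} + 1) and degree-normalized.
    for u in neighbors[v]:
        out += (A[(v, u)] + 1) * h[u] / math.sqrt(n_tilde(u) * n_tilde(v))
    return out

print(aggregate(0))
```

Replacing the scalars with vectors and applying a trained $W^{(t)}$ and non-linearity would recover the full layer.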
We use multi-head attention, as stated in the original work (Veličković et al., 2017).

# GIN and GraphSNN$_{GIN}$

Graph Isomorphism Network (GIN) (Xu et al., 2019) takes the sum aggregation over a neighborhood, followed by a 2-layer MLP, where $\epsilon^{(t+1)}$ is a learnable parameter or a fixed scalar. Each neural layer is expressed as:

$$
h_{v}^{(t+1)} = \mathrm{MLP}^{(t+1)}\left((1 + \epsilon^{(t+1)}) h_{v}^{(t)} + \sum_{u \in \mathcal{N}(v)} h_{u}^{(t)}\right). \tag{15}
$$

Here, we consider one of the GIN variants employed in the original paper, where the learnable parameter $\epsilon = 0$, and generalise it to $\mathrm{GraphSNN}_{GIN}$ as defined below:

$$
h_{v}^{(t+1)} = \mathrm{MLP}^{(t+1)}\left(\gamma^{(t)}\left(\sum_{u \in \mathcal{N}(v)} \tilde{A}_{vu} + 1\right) h_{v}^{(t)} + \sum_{u \in \mathcal{N}(v)} \left(\tilde{A}_{vu} + 1\right) h_{u}^{(t)}\right). \tag{16}
$$

# B. EXPERIMENTS

# DATASETS

Table 10 contains the statistics for the five datasets used in our experiments on node classification in Section 5.1.
| Dataset | Type | #Nodes | #Edges | #Classes | #Features |
| --- | --- | --- | --- | --- | --- |
| Cora | Citation network | 2,708 | 5,429 | 7 | 1,433 |
| Citeseer | Citation network | 3,327 | 4,732 | 6 | 3,703 |
| Pubmed | Citation network | 19,717 | 44,338 | 3 | 500 |
| NELL | Knowledge graph | 65,755 | 266,144 | 210 | 5,414 |
| ogbn-arxiv | Citation network | 169,343 | 1,166,243 | 40 | 128 |
Table 10: Statistics for node classification datasets.

Table 11 below contains the statistics for the datasets used in our experiments on small graph classification in Section 5.2, as well as the datasets used in an additional experiment on graph classification following the data splits and experimental setup in (Errica et al., 2020). The results of this additional experiment are reported under the section "Graph Classification using Setup (Errica et al., 2020)" in Appendix B.

Table 12 contains the statistics for the five large graph datasets from Open Graph Benchmark (OGB) (Hu et al., 2020), used in our experiments on large graph classification in Section 5.2.

# EXPERIMENTAL SETUP ON SMALL GRAPHS

Previously, several experimental setups have been considered for evaluating graph classification on small graphs from the TUD benchmark datasets (https://chrsmrrs.github.io/datasets/). All the baseline methods in our paper use the 10-fold cross-validation technique. However, they differ in how they split training/validation/testing data and how they report the final results in terms of classification accuracy. Below, we discuss the details of their experimental setups.
| Dataset | #Graphs | Avg # Nodes | Avg # Edges | #Classes |
| --- | --- | --- | --- | --- |
| MUTAG | 188 | 17.93 | 19.79 | 2 |
| PTC-MR | 344 | 14.29 | 14.69 | 2 |
| BZR | 405 | 35.75 | 38.36 | 2 |
| COX2 | 467 | 41.22 | 43.45 | 2 |
| ENZYMES | 600 | 32.63 | 64.14 | 6 |
| IMDB-B | 1000 | 19.77 | 96.53 | 2 |
| PROTEINS | 1113 | 39.06 | 72.82 | 2 |
| D&D | 1178 | 284.32 | 715.66 | 2 |
| NCI1 | 4110 | 29.87 | 32.30 | 2 |
| RDT-M5K | 5000 | 508.52 | 594.87 | 5 |
| COLLAB | 5000 | 74.49 | 2457.78 | 3 |
Table 11: Statistics for small graph classification datasets.
| Dataset | #Graphs | Avg # Nodes | Avg # Edges | #Tasks | Task Type |
| --- | --- | --- | --- | --- | --- |
| ogbg-molhiv | 41,127 | 25.5 | 27.5 | 1 | Binary classification |
| ogbg-moltox21 | 7,831 | 18.6 | 19.3 | 12 | Binary classification |
| ogbg-moltoxcast | 8,576 | 18.8 | 19.3 | 617 | Binary classification |
| ogbg-molpcba | 437,929 | 26.0 | 28.1 | 128 | Binary classification |
| ogbg-ppa | 158,100 | 243.4 | 2,266.1 | 1 | Multi-class classification |
Table 12: Statistics for large graph classification datasets (OGB graph datasets).

- CapsGNN (Xinyi & Chen, 2018) splits the datasets into $80\%$ for training, $10\%$ for validation, and $10\%$ for testing. Training is stopped when performance on the validation set reaches its highest point. They then take the test-set accuracy corresponding to the epoch with the highest validation accuracy in each fold. The final results are reported as the mean accuracy and standard deviation over 10 folds.
- DGCNN (Zhang et al., 2018a) splits the datasets into $90\%$ for training and $10\%$ for testing. They take the test accuracy of the last epoch in each fold and report the final results as the mean accuracy and standard deviation over 10 folds.
- GIN and GraphSAGE (Xu et al., 2019) split the datasets into $90\%$ for training and $10\%$ for testing. They average the test accuracy over 10 folds and select the epoch with the highest averaged accuracy. They then report the final results as the mean accuracy and standard deviation at the selected epoch.
- FGW (Titouan et al., 2019) splits the datasets into $90\%$ for training and $10\%$ for testing. They then use the nested cross-validation technique on the same folds, repeating the process 10 times, and report the final results as the mean accuracy and standard deviation.
- The other baseline methods split the datasets into $90\%$ for training and $10\%$ for testing, and repeat their experiment 10 times. They then report the final results as the mean accuracy and standard deviation.

In our work, we split the datasets into $90\%$ for training and $10\%$ for testing. We obtain the best validation accuracy on each fold. We then report the final results as the mean accuracy and standard deviation over 10 folds$^1$.

# NODE CLASSIFICATION USING RANDOM SPLITS

Following the work of Pei et al.
(2020), we randomly split graph nodes into $60\%$, $20\%$ and $20\%$ for training, validation and testing, respectively. The other hyperparameter settings are the same as in Section 5.1. Table 13 shows the results. We see that our models consistently outperform all of the baseline methods on all benchmark datasets. Specifically, $\mathrm{GraphSNN}_{GCN}$ improves upon GCN by a margin of $1.5\%$, $1.7\%$, $1.6\%$ and $2.4\%$ on Cora, Citeseer, Pubmed and NELL, respectively.

$\mathrm{GraphSNN}_{GAT}$ improves upon GAT by $1.3\%$, $1.6\%$ and $2.0\%$ on Cora, Citeseer and Pubmed, respectively. $\mathrm{GraphSNN}_{GIN}$ improves upon GIN by $3.8\%$, $1.7\%$, $1.8\%$ and $1.6\%$ on Cora, Citeseer, Pubmed and NELL, respectively. $\mathrm{GraphSNN}_{GraphSAGE}$ improves upon GraphSAGE by $1.3\%$, $1.7\%$, $1.1\%$ and $2.3\%$ on Cora, Citeseer, Pubmed and NELL, respectively.
| Method | Cora | Citeseer | Pubmed | NELL |
| --- | --- | --- | --- | --- |
| GCN | 85.7 ± 1.6 | 73.6 ± 1.0 | 88.1 ± 1.2 | 72.2 ± 5.6 |
| GraphSNN$_{GCN}$ | 87.2 ± 1.5 | 75.3 ± 1.3 | 89.7 ± 1.7 | 74.6 ± 6.3 |
| GAT | 86.3 ± 0.3 | 74.3 ± 0.3 | 87.6 ± 0.1 | - |
| GraphSNN$_{GAT}$ | 87.6 ± 0.9 | 75.9 ± 0.8 | 89.6 ± 0.6 | - |
| GIN | 82.5 ± 0.8 | 70.8 ± 1.9 | 85.0 ± 1.5 | 66.7 ± 3.3 |
| GraphSNN$_{GIN}$ | 86.3 ± 0.7 | 72.5 ± 1.5 | 86.8 ± 1.2 | 68.3 ± 3.7 |
| GraphSAGE | 86.8 ± 1.9 | 74.2 ± 1.8 | 88.3 ± 1.1 | 69.4 ± 4.3 |
| GraphSNN$_{GraphSAGE}$ | 88.1 ± 1.5 | 75.9 ± 1.3 | 89.4 ± 2.4 | 71.7 ± 4.5 |

Table 13: Classification accuracy (%) averaged over 10 random splits on node classification.

# GRAPH CLASSIFICATION USING SETUP (ERRICA ET AL., 2020)

Following the data splits and experimental setup introduced in (Errica et al., 2020), we further evaluate our method. The experimental setup in (Errica et al., 2020) provides a fair performance comparison across GNN methods. The evaluation process has two phases: (1) model selection on the validation set, and (2) model assessment on the test set. More specifically, they first split the datasets into $90\%$ for training and $10\%$ for testing. The entire training set is then further split into $90\%$ for training and $10\%$ for validation. They apply the inner hold-out method to select the best model based on validation accuracy. After selecting the best model, they train it three times on the entire training set with early stopping.

We have conducted experiments on four bioinformatics datasets (NCI1, PROTEINS, ENZYMES and D&D) and three social network datasets (COLLAB, IMDB-B and REDDIT-5k) with node features. The results of the Baseline, DGCNN and GIN are taken from (Errica et al., 2020). Note that the final results of DGCNN and GIN in (Errica et al., 2020) are reported as the mean accuracy and standard deviation on the test set over these three runs, which differ from the original papers of DGCNN and GIN. Table 14 shows the results.
| Method | NCI1 | PROTEINS | ENZYMES | D&D | COLLAB | IMDB-B | REDDIT-5k |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | 69.8 ± 2.2 | 75.8 ± 3.7 | 65.2 ± 6.4 | 78.4 ± 4.5 | 70.2 ± 1.5 | 70.8 ± 5.0 | 52.2 ± 1.5 |
| DGCNN | 76.4 ± 1.7 | 72.9 ± 3.5 | 38.9 ± 5.7 | 76.6 ± 4.3 | 71.2 ± 1.9 | 69.2 ± 3.0 | 49.2 ± 1.2 |
| GIN | 80.0 ± 1.4 | 73.3 ± 4.0 | 59.6 ± 4.5 | 75.3 ± 2.9 | 75.6 ± 2.3 | 71.2 ± 3.9 | 56.1 ± 1.7 |
| GraphSNN | 81.6 ± 2.8 | 74.5 ± 3.5 | 61.7 ± 3.4 | 77.1 ± 3.3 | 77.0 ± 3.1 | 72.3 ± 3.6 | 57.1 ± 3.1 |
Table 14: Classification accuracy (%) averaged over 10 runs on graph classification.

# GRAPH CLASSIFICATION ON OGB GRAPH DATASETS

Table 15 shows the running time of the preprocessing step in our method GraphSNN for large graph datasets (averaged over 5 runs). Note that the preprocessing step can be parallelized efficiently at the node level. The CPU time shows the total preprocessing time for a dataset in which each node is preprocessed sequentially, and the CPU time per node shows the average preprocessing time per node.

| Dataset | CPU time (seconds) | CPU time per node (milliseconds) |
| --- | --- | --- |
| ogbg-molhiv | 66.97 | 0.06383 |
| ogbg-moltox21 | 79.37 | 0.54565 |
| ogbg-moltoxcast | 380.84 | 2.36417 |
| ogbg-ppa | 820.12 | 4.71235 |

Table 15: Running time of the preprocessing step for large graph datasets, averaged over 5 runs.

# OVERSMOOTHING ANALYSIS

We have also conducted further experiments to analyze the effectiveness of our method in alleviating the over-smoothing issue. We compare GIN (a spatial GNN), DFNets (Wijesinghe & Wang, 2019) (a spectral GNN), GraphSNN$_{GIN}$ and GraphSNN$_{GCN}$. For a fair comparison, we remove the dense-net architecture of DFNets and use the same hyperparameters as in the original paper. We evaluate all models on the Cora dataset using the standard splits. The classification accuracy is averaged over 10 runs on node classification.
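The over-smoothing effect discussed here can be reproduced in miniature. The sketch below uses a toy path graph, scalar features, and plain mean aggregation over the closed neighborhood with no learned weights (all assumptions of this illustration): repeated averaging drives node features toward a common value, so their spread shrinks with depth.

```python
def mean_aggregate(adj, h):
    # One round of mean aggregation over the closed neighborhood N(v) ∪ {v}.
    return {v: (h[v] + sum(h[u] for u in adj[v])) / (len(adj[v]) + 1)
            for v in adj}

def spread(h):
    # Max minus min feature value: a crude measure of feature diversity.
    return max(h.values()) - min(h.values())

# Toy connected path graph with scalar features.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
h = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}

spreads = []
for _ in range(8):
    h = mean_aggregate(adj, h)
    spreads.append(spread(h))
# The spread decays toward 0 as layers accumulate (over-smoothing).
print([round(s, 3) for s in spreads])
```

Weighting neighbors unequally, as the structural coefficients do, is one way to slow this collapse.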
| #Layers | GIN | GraphSNN$_{GIN}$ | DFNet | GraphSNN$_{GCN}$ |
| --- | --- | --- | --- | --- |
| 1 | 73.3 ± 1.5 | 76.1 ± 1.6 | 80.5 ± 0.6 | 80.1 ± 0.8 |
| 2 | 77.6 ± 1.3 | 79.2 ± 1.7 | 81.9 ± 0.5 | 83.1 ± 1.8 |
| 3 | 75.2 ± 1.7 | 78.5 ± 1.3 | 82.6 ± 0.3 | 82.0 ± 0.8 |
| 4 | 48.6 ± 2.1 | 77.2 ± 2.3 | 80.7 ± 0.6 | 80.1 ± 0.7 |
| 5 | 40.3 ± 1.9 | 75.9 ± 2.1 | 75.6 ± 0.3 | 79.1 ± 1.2 |
| 6 | 36.1 ± 2.3 | 73.3 ± 1.8 | 65.3 ± 1.3 | 76.5 ± 1.3 |
| 7 | 27.5 ± 2.1 | 71.9 ± 1.5 | 60.9 ± 1.5 | 76.3 ± 1.3 |
| 8 | 20.3 ± 1.8 | 69.3 ± 2.2 | 53.6 ± 1.3 | 75.7 ± 1.2 |
Table 16: Oversmoothing analysis of GIN and a spectral GNN (DFNet) on the Cora dataset.

The reason why GraphSNN can alleviate over-smoothing is that its structural coefficients capture the structural connectivity between a target vertex and its neighbors. Thus, a neighbor whose structural connectivity is weak passes little information to the target vertex, whereas a neighbor whose structural connectivity is strong passes a strong message to the target vertex.

Figure 4 shows the results of GCN and GraphSNN$_{GCN}$ on the datasets Cora, Citeseer and Pubmed, in terms of classification accuracy averaged over 10 runs in the setting of standard splits.

![](images/4acd5791b362dbc4e7ef64a4b31111d8ad1a87e935c4980894ff9f161342fcb5.jpg)
Figure 4: Oversmoothing analysis w.r.t. the model depth for node classification.

![](images/4a3fea097117abbf01e9b0a3d07dfbcc276adadece7b45a267cc824eadd64036.jpg)

![](images/f06f91480ab31ec005a06e72558203632767fa69685066c5cc79a5bef83b7e99.jpg)

# ABLATION STUDY WITH AUGMENTED NODE FEATURES

We consider an experimental evaluation setup called BL, which serves as the baseline for all experiments in this ablation study. In the BL setting, AGGREGATE in GraphSNN is set to 1. Different variants of BL then add different local substructure counts as additional node features. This allows us to analyse what types of local substructures our proposed architecture can distinguish.

There are five variants of BL considered in the ablation study:

(1) $\mathrm{BL}_{SC}$: Setting AGGREGATE of GraphSNN to 1 and keeping structural coefficients for neighbors.
(2) $\mathrm{BL}_{NF}^{clique}$: Setting AGGREGATE of GraphSNN to 1, removing structural coefficients for neighbors, and adding additional node features (triangle and 4-clique counts) to the original feature vectors.
| Method | GSN-v | $\mathrm{BL}_{NF}^{clique}$ | $\mathrm{BL}_{SC}$ | $\mathrm{BL}_{SC+NF}^{clique}$ | GraphSNN |
| --- | --- | --- | --- | --- | --- |
| MUTAG | 92.20 ± 7.5 | 90.21 ± 2.3 | 94.06 ± 2.4 | 95.16 ± 2.5 | 94.70 ± 1.9 |
| PTC-MR | 67.40 ± 5.7 | 67.13 ± 2.9 | 70.18 ± 3.1 | 71.04 ± 3.1 | 70.58 ± 3.1 |
| PROTEINS | 74.59 ± 5.0 | 76.42 ± 2.6 | 78.05 ± 2.3 | 78.66 ± 2.1 | 78.42 ± 2.7 |
| BZR | - | 86.82 ± 3.1 | 90.67 ± 3.1 | 91.98 ± 3.2 | 91.12 ± 3.0 |
| IMDB-B | 76.80 ± 2.0 | 77.00 ± 3.1 | 77.23 ± 2.8 | 78.53 ± 2.9 | 78.01 ± 2.8 |
Table 17: Analysis of the effects of our structural coefficients with substructure counts (triangle and 4-clique counts). Classification accuracy (%) averaged over 10 runs on graph classification.
| Method | ID-GNN | $\mathrm{BL}_{NF}^{cycle}$ | $\mathrm{BL}_{SC}$ | $\mathrm{BL}_{SC+NF}^{cycle}$ | GraphSNN |
| --- | --- | --- | --- | --- | --- |
| MUTAG | 96.50 ± 3.2 | 91.36 ± 2.1 | 94.06 ± 2.4 | 96.61 ± 2.3 | 94.70 ± 1.9 |
| PTC-MR | 61.90 ± 5.4 | 67.57 ± 3.3 | 70.18 ± 3.1 | 71.76 ± 3.2 | 70.58 ± 3.1 |
| PROTEINS | 78.00 ± 3.5 | 77.26 ± 2.5 | 78.05 ± 2.3 | 78.95 ± 2.5 | 78.42 ± 2.7 |
| BZR | 86.40 ± 3.0 | 86.83 ± 3.3 | 90.67 ± 3.1 | 91.75 ± 3.4 | 91.12 ± 3.0 |
| IMDB-B | - | 76.36 ± 2.6 | 77.23 ± 2.8 | 78.58 ± 2.4 | 78.01 ± 2.8 |
Table 18: Analysis of the effects of our structural coefficients with substructure counts (cycle counts). Classification accuracy (%) averaged over 10 runs on graph classification.

(3) $\mathrm{BL}_{SC+NF}^{clique}$: Setting AGGREGATE of GraphSNN to 1, keeping structural coefficients for neighbors, and adding additional node features (triangle and 4-clique counts) to the original feature vectors.
(4) $\mathrm{BL}_{NF}^{cycle}$: Setting AGGREGATE of GraphSNN to 1, removing structural coefficients for neighbors, and adding additional node features (cycle counts) to the original feature vectors.
(5) $\mathrm{BL}_{SC+NF}^{cycle}$: Setting AGGREGATE of GraphSNN to 1, keeping structural coefficients for neighbors, and adding additional node features (cycle counts) to the original feature vectors.

We compare GraphSNN with GSN-v (Bouritsas et al., 2020), $\mathrm{BL}_{NF}^{clique}$, $\mathrm{BL}_{SC}$, and $\mathrm{BL}_{SC+NF}^{clique}$ to analyze how our proposed architecture relates to models with triangle and 4-clique counts as additional node features. Similarly, we compare GraphSNN with ID-GNNs (You et al., 2021), $\mathrm{BL}_{NF}^{cycle}$, $\mathrm{BL}_{SC}$, and $\mathrm{BL}_{SC+NF}^{cycle}$ to analyze how our proposed architecture relates to models with cycle counts as additional node features. We concatenate the counts of cycles with length 1 to 4 starting and ending at the given source node with its original feature vector, as in (You et al., 2021). Table 17 and Table 18 show the experimental results. As AGGREGATE is set to 1 in the BL setting, the performance gap between $\mathrm{BL}_{NF}$ and $\mathrm{BL}_{SC+NF}$ reflects the effectiveness of structural coefficients in enhancing relational inference between a target vertex and its neighbors. The performance gap between $\mathrm{BL}_{SC}$ and GraphSNN shows the effectiveness of AGGREGATE in our proposed model GraphSNN.
Furthermore, $\mathrm{BL}_{SC+NF}$ consistently performs best, since it incorporates both extra node features and structural coefficients into the feature aggregation. There is a small performance gap between $\mathrm{BL}_{SC+NF}$ and GraphSNN because the augmented node features can capture additional structural information that cannot be captured by structural coefficients alone.

# C. PROOFS FOR LEMMAS AND THEOREMS

# Proof for Theorem 1

Theorem 1. The following statements are true: (a) If $S_i \simeq_{subgraph} S_j$, then $S_i \simeq_{overlap} S_j$; but not vice versa; (b) If $S_i \simeq_{overlap} S_j$, then $S_i \simeq_{subtree} S_j$; but not vice versa.

Proof. In the following, we prove the statements in this theorem one by one.

For Statement (a), by $S_{i} \simeq_{subgraph} S_{j}$ and Definition 1, we know that there exists a bijective mapping $g': \tilde{\mathcal{N}}(i) \to \tilde{\mathcal{N}}(j)$ such that, for the vertex $i$ and any vertex $v' \in \mathcal{N}(i)$, $i$ and $v'$ are adjacent in $S_{i}$ iff $j = g(i)$ and $u' = g(v')$ are adjacent in $S_{j}$, and $h_{i} = h_{j}$ and $h_{v'} = h_{u'}$, where $g$ is a bijective mapping between $S_{i}$ and $S_{j}$ as defined by Definition 1. Then, for each pair of overlap subgraphs $S_{iv'}$ and $S_{ju'}$, we can further extend $g'$ along $g$ on $S_{iv'}$ and $S_{ju'}$. That is, $g'(v) = u$ iff $g(v) = u$. If $v$ is in $S_{iv'}$, by the definition of overlap subgraph, $v$ must either be $i$ or a neighbor of $i$. Hence $u = g'(v)$ in this case must be either $j$ or a neighbor of $j$. By the definition of $g$ and the fact that $g'(v) = u$ iff $g(v) = u$, we know that, for any two vertices $v_1$ and $v_2$ in $S_{iv'}$, they are adjacent in $S_{iv'}$ iff their corresponding vertices $g'(v_1)$ and $g'(v_2)$ are adjacent in $S_{ju'}$ and their corresponding feature vectors are indistinguishable, i.e., $S_{iv'} \simeq_{subgraph} S_{ju'}$ for any $v' \in \mathcal{N}(i)$ and $g(v') = u'$.
Conversely, if $S_i \simeq_{overlap} S_j$, then it is possible that $S_i \not\simeq_{subgraph} S_j$, as shown by the two graphs in Figure 2(a).

For Statement (b), if $S_i \simeq_{\text{overlap}} S_j$, then to prove $S_i \simeq_{\text{subtree}} S_j$ we need to show that there exists a bijective mapping $g: \tilde{\mathcal{N}}(i) \to \tilde{\mathcal{N}}(j)$ such that $g(i) = j$ and, for any $v' \in \tilde{\mathcal{N}}(i)$ with $g(v') = u'$, the feature vectors of $v'$ and $u'$ are indistinguishable, i.e., $h_{v'} = h_{u'}$. By Def. 2, we can find a bijective mapping $g': \tilde{\mathcal{N}}(i) \to \tilde{\mathcal{N}}(j)$ such that $g'(i) = j$ and, for any $v' \in \mathcal{N}(i)$ with $g'(v') = u'$, $S_{iv'}$ and $S_{ju'}$ are subgraph-isomorphic. This implies that $g'$ cannot distinguish the feature vectors of $v'$ and $u'$ for any $v' \in \tilde{\mathcal{N}}(i)$ with $g'(v') = u'$. Similarly, the converse does not necessarily hold; one counterexample is the pair of graphs shown in Figure 2(b), which are subtree-isomorphic but not overlap-isomorphic.

# Proof for Theorem 2

Theorem 2. Let $M$ be a GNN. $M$ is as powerful as 1-WL in distinguishing non-isomorphic graphs if $M$ has a sufficient number of layers and each layer can map any $S_{i}$ and $S_{j}$ in $S$ into two different embeddings (i.e., $\zeta(S_{i}) \neq \zeta(S_{j})$) if and only if $S_{i} \not\simeq_{\text{subtree}} S_{j}$.

Proof. We first show that, for any two graphs $G_{1}$ and $G_{2}$, if they can be distinguished by 1-WL, then they must be distinguishable by such a GNN $M$ as well. Suppose that 1-WL takes $k$ iterations to distinguish $G_{1}$ and $G_{2}$, i.e., 1-WL yields the same multiset of node labels on $G_{1}$ and $G_{2}$ in the iterations from 0 to $k-1$, but two different multisets of node labels on $G_{1}$ and $G_{2}$ in the $k$-th iteration.
To derive a contradiction, we assume that a GNN $M$ satisfying the above conditions cannot distinguish $G_{1}$ and $G_{2}$ in the iterations from 0 to $k$. Since 1-WL can distinguish $G_{1}$ and $G_{2}$ in the $k$-th iteration, there must exist two neighborhood subgraphs, say $S_{i}$ and $S_{j}$, which correspond to two different multisets of node labels on $G_{1}$ and $G_{2}$ at the $k$-th iteration. These two different multisets of node labels correspond to two different multisets of feature vectors in their neighborhoods, i.e., $\{\{h_v | v \in \mathcal{N}(i)\}\} \neq \{\{h_u | u \in \mathcal{N}(j)\}\}$. By Def. 3, we know that $S_{i} \not\simeq_{\text{subtree}} S_{j}$. This means that $\zeta(S_{i}) \neq \zeta(S_{j})$, which contradicts the assumption that $M$ cannot distinguish $G_{1}$ and $G_{2}$ in the $k$-th iteration.

Now, we show the other direction: for any two graphs $G_{1} = (V_{1},E_{1})$ and $G_{2} = (V_{2},E_{2})$, if they can be distinguished by such a GNN $M$, then they must be distinguishable by 1-WL. Similarly, suppose that at the $k$-th iteration, $M$ maps the neighborhood subgraphs of these two graphs into two different multisets of node embeddings, i.e., $\{\{\zeta(S_v)|v\in V_1\}\} \neq \{\{\zeta(S_u)|u\in V_2\}\}$. This means that we can find at least two different neighborhood subgraphs $S_{i}$ and $S_{j}$ such that $\zeta(S_{i}) \neq \zeta(S_{j})$. For such neighborhood subgraphs $S_{i}$ and $S_{j}$, we know that $S_{i} \not\simeq_{subtree} S_{j}$. This means that $S_{i}$ and $S_{j}$ correspond to either $h_{i} \neq h_{j}$ or $\{\{h_v|v\in \mathcal{N}(i)\}\} \neq \{\{h_u|u\in \mathcal{N}(j)\}\}$, which can be relabeled by 1-WL into two different new labels. Thus, 1-WL can also distinguish such neighborhood subgraphs, and accordingly distinguish $G_{1}$ and $G_{2}$.

The proof is completed.
![](images/140f130ed38a035937484e2adb95ecdef4cf0317219371c0b35ffd31ad48ebd3.jpg)

# Proof for Theorem 3

Theorem 3. Let $M$ be a GNN whose aggregation scheme $\Phi$ is defined by Eq. 1-Eq. 3. $M$ is strictly more expressive than 1-WL in distinguishing non-isomorphic graphs if $M$ has a sufficient number of layers and also satisfies the following conditions:

(1) $M$ can distinguish at least two neighborhood subgraphs $S_{i}$ and $S_{j}$ with $S_{i} \simeq_{\text{subtree}} S_{j}$, $S_{i} \not\simeq_{\text{subgraph}} S_{j}$ and $\{\{\tilde{A}_{iv'}|v' \in \mathcal{N}(i)\}\} \neq \{\{\tilde{A}_{ju'}|u' \in \mathcal{N}(j)\}\}$;
(2) $\Phi\left(h_v^{(t)}, \{\{h_u^{(t)}|u \in \mathcal{N}(v)\}\}, \{\{(\tilde{A}_{vu}, h_u^{(t)})|u \in \mathcal{N}(v)\}\}\right)$ is injective.

Proof. We prove this theorem in two steps. First, we prove by contradiction that a GNN $M$ satisfying the above conditions can distinguish any two graphs that are distinguishable by 1-WL. Assume that there exist two graphs $G_{1}$ and $G_{2}$ which can be distinguished by 1-WL but cannot be distinguished by $M$. Further, suppose that 1-WL cannot distinguish these two graphs in the iterations from 0 to $k-1$, but can distinguish them in the $k$-th iteration. Then, there must exist two neighborhood subgraphs $S_{i}$ and $S_{j}$ whose neighboring nodes correspond to two different multisets of node labels at the $k$-th iteration, i.e., $\{\{h_v^{(k)}|v\in \mathcal{N}(i)\}\} \neq \{\{h_u^{(k)}|u\in \mathcal{N}(j)\}\}$. By condition (2) above, $\Phi$ is injective. Thus, for $S_{i}$ and $S_{j}$, $\Phi$ yields two different feature vectors at the $k$-th iteration. This means that $M$ can also distinguish $G_{1}$ and $G_{2}$, which contradicts the assumption. This completes the first step. For the second step, we show that there exist at least two graphs that can be distinguished by $M$ but cannot be distinguished by 1-WL. Figure 1 presents two such graphs.
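To make condition (1) concrete, the sketch below (toy graphs; common-neighbor counts per edge are used as a crude stand-in for the overlap-based structural coefficients, an assumption of this illustration rather than the paper's implementation) runs 1-WL color refinement on a 6-cycle versus two disjoint triangles: 1-WL yields identical color multisets, while the per-edge overlap sizes already differ.

```python
from collections import Counter

def wl_colors(adj, iters=3):
    """1-WL color refinement from uniform initial colors; returns the
    multiset of final node colors as a Counter."""
    colors = {v: 0 for v in adj}
    for _ in range(iters):
        # New color = old color + sorted multiset of neighbor colors.
        new = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        # Compress composite colors to small integers.
        palette = {c: i for i, c in enumerate(sorted(set(new.values())))}
        colors = {v: palette[new[v]] for v in adj}
    return Counter(colors.values())

def edge_overlap_sizes(adj):
    """Multiset of |N(v) ∩ N(u)| over undirected edges (v, u)."""
    return Counter(len(set(adj[v]) & set(adj[u]))
                   for v in adj for u in adj[v] if v < u)

# Two 2-regular graphs on 6 nodes: a classic 1-WL failure case.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}

print(wl_colors(cycle6) == wl_colors(two_triangles))                  # True
print(edge_overlap_sizes(cycle6), edge_overlap_sizes(two_triangles))  # differ
```

Every edge of the 6-cycle has 0 common neighbors while every triangle edge has 1, so an aggregation scheme fed these edge-level quantities can separate the pair even though 1-WL cannot.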
# Proof for Theorem 4

We consider that, for each vertex in a graph, its node features come from a countable set; similarly, for each pair of adjacent vertices in a graph, its structural coefficient also comes from a countable set. Assume that $\mathcal{H}$, $\mathcal{A}$ and $\mathcal{W}$ are countable sets, where $\mathcal{H}$ is a node feature space, $\mathcal{A}$ is a structural coefficient space, and $\mathcal{W} = \{A_{ij}h_i|A_{ij}\in \mathcal{A},h_i\in \mathcal{H}\}$. Let $H$ and $W$ be two multisets containing elements from $\mathcal{H}$ and $\mathcal{W}$, respectively, with $|H| = |W|$.

Lemma 1. There exists a function $f$ s.t. $\pi(H, W) = \sum_{h \in H, w \in W} f(h, w)$ is unique for any distinct pair of multisets $(H, W)$.

Proof. Since $\mathcal{H}$ and $\mathcal{W}$ are countable, there must exist two functions $\psi_1: \mathcal{H} \to \mathbb{N}_{odd}$ mapping $h \in \mathcal{H}$ to odd natural numbers and $\psi_2: \mathcal{W} \to \mathbb{N}_{even}$ mapping $w \in \mathcal{W}$ to even natural numbers. Further, for any pair of multisets $(H, W)$, since the cardinalities of $H$ and $W$ are bounded, there must exist a number $N \in \mathbb{N}$ such that $|H| < N$ and $|W| < N$. Thus, we can find a prime number $P > 2N$. Then we have a mapping $f$ given by $f(h, w) = P^{-\psi_1(h)} + P^{-\psi_2(w)}$ such that $\sum_{h \in H, w \in W} f(h, w)$ is unique for each distinct pair $(H, W)$.

Lemma 2. There exists a function $f$ s.t. $\pi'(h_v, H, W) = \gamma f(h_v, |H| h_v) + \sum_{h \in H, w \in W} f(h, w)$ is unique for any distinct $(h_v, H, W)$, where $h_v \in \mathcal{H}$, $|H| h_v \in \mathcal{W}$, and $\gamma$ can be an irrational number.

Proof. As $h_v \in \mathcal{H}$ and $|H|h_v \in \mathcal{W}$, we may take $f(h_v, |H|h_v) = P^{-\psi_1(h_v)} + P^{-\psi_2(|H|h_v)}$, where $\psi_1: \mathcal{H} \to \mathbb{N}_{odd}$ and $\psi_2: \mathcal{W} \to \mathbb{N}_{even}$ are as defined in the proof of Lemma 1.
Let $(h_{v1}, H_1, W_1)$ and $(h_{v2}, H_2, W_2)$ be two different tuples. Then, there are two cases:

(1) When $h_{v1} = h_{v2}$ but $(H_1, W_1) \neq (H_2, W_2)$, by Lemma 1, we know that $\sum_{h \in H_1, w \in W_1} f(h, w) \neq \sum_{h \in H_2, w \in W_2} f(h, w)$. Thus, $\pi'(h_{v1}, H_1, W_1) \neq \pi'(h_{v2}, H_2, W_2)$.
(2) When $h_{v1} \neq h_{v2}$, we prove $\pi'(h_{v1}, H_1, W_1) \neq \pi'(h_{v2}, H_2, W_2)$ by contradiction. Assume that $\pi'(h_{v1}, H_1, W_1) = \pi'(h_{v2}, H_2, W_2)$. Then, we have:

$$
\gamma f(h_{v1}, |H_1|h_{v1}) + \sum_{h \in H_1, w \in W_1} f(h, w) = \gamma f(h_{v2}, |H_2|h_{v2}) + \sum_{h \in H_2, w \in W_2} f(h, w).
$$

This gives us the following equation:

$$
\gamma \Big(f(h_{v1}, |H_1|h_{v1}) - f(h_{v2}, |H_2|h_{v2})\Big) = \Big(\sum_{h \in H_2, w \in W_2} f(h, w)\Big) - \Big(\sum_{h \in H_1, w \in W_1} f(h, w)\Big).
$$

When $\gamma$ is an irrational number, the L.H.S. of the above equation is irrational (note that $f(h_{v1}, |H_1|h_{v1}) \neq f(h_{v2}, |H_2|h_{v2})$ since $h_{v1} \neq h_{v2}$ and $\psi_1$ is injective, so the L.H.S. is a nonzero multiple of $\gamma$), whereas the R.H.S. is rational. This is a contradiction. Thus, $\pi'(h_{v1}, H_1, W_1) \neq \pi'(h_{v2}, H_2, W_2)$.

□

Based on Lemma 1 and Lemma 2, we can prove the following theorem.

Theorem 4. GraphSNN is more expressive than 1-WL in testing non-isomorphic graphs.

Proof. We prove this theorem by showing that GraphSNN is a GNN satisfying the conditions stated in Theorem 3. For the first condition, consider the two graphs shown in Figure 1. GraphSNN can distinguish these two neighborhood subgraphs $S_{i}$ and $S_{j}$ with $\{\{\tilde{A}_{iv'}|v'\in \mathcal{N}(i)\}\} \neq \{\{\tilde{A}_{ju'}|u'\in \mathcal{N}(j)\}\}$. For the second condition, by Lemmas 1 and 2, together with the fact that an MLP is a universal approximator (Xu et al., 2019) and can be used to model and learn the functions $f$ and $g$, we know that GraphSNN also satisfies this condition.
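The prime-power encoding in Lemma 1 can be checked numerically with exact rational arithmetic. In the sketch below, the finite element spaces, the odd/even index maps $\psi_1$, $\psi_2$, the bound $N$, and the prime $P$ are toy choices made only to satisfy the proof's constraints ($P > 2N$, odd/even disjoint ranges):

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# Toy countable spaces: elements of H indexed by odd naturals, W by even.
psi1 = {'a': 1, 'b': 3, 'c': 5}   # psi1: H -> N_odd
psi2 = {'x': 2, 'y': 4, 'z': 6}   # psi2: W -> N_even
N = 3                              # bound on multiset sizes (|H|, |W| < N)
P = 7                              # a prime with P > 2N

def f(h, w):
    # f(h, w) = P^{-psi1(h)} + P^{-psi2(w)}, kept exact via Fraction.
    return Fraction(1, P ** psi1[h]) + Fraction(1, P ** psi2[w])

def pi(H, W):
    # pi(H, W) = sum over all pairs (h, w) with h in H, w in W.
    return sum(f(h, w) for h in H for w in W)

# Enumerate every pair of equal-size multisets (sizes 1 and 2) and check
# that pi never collides, as Lemma 1 claims.
seen = {}
for k in (1, 2):
    for H in combinations_with_replacement(sorted(psi1), k):
        for W in combinations_with_replacement(sorted(psi2), k):
            val = pi(H, W)
            assert seen.setdefault(val, (H, W)) == (H, W), "collision!"
print("no collisions:", len(seen), "distinct (H, W) pairs")
```

Each value's base-$P$ expansion has one digit per index position, and the digits stay below $P$ because the multiset sizes are bounded by $N$, which is exactly why the sums are injective.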
\ No newline at end of file diff --git a/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/images.zip b/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..705722542fabf16fbad53f1c9ba33ce7fde4e9ee --- /dev/null +++ b/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5b07a0904b01cbac3589db25af21fc24ffa41382b6c64f2b5c2fe178e03bf35f +size 1196067 diff --git a/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/layout.json b/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..56721c434d8e936fa5df460a08b175aa02d2290a --- /dev/null +++ b/anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:629bd5d5e494dc2bdeae3b321b4c07effeac2f0831e0c5549c23b694f62e6145 +size 1024088 diff --git a/asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_content_list.json b/asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..79a7ae1405a0cc9c8318981689e8d85cc3faf538 --- /dev/null +++ b/asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d122638af7471cda1c776ae4f87037a880d8d0112d839cccc90c77635056af9d +size 133970 diff --git a/asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_model.json b/asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_model.json new 
file mode 100644 index 0000000000000000000000000000000000000000..7a6a664f4e0d73aa80cd0bf57255810609b370eb --- /dev/null +++ b/asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b323cea79820603d69ef6d550bfd7281a7487551ff78ab1ba6e8579f485690b8 +size 161875 diff --git a/asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_origin.pdf b/asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..dee13bbdbde403a69ab4a7bd00c3b957495c9b11 --- /dev/null +++ b/asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a059e78d8443348afaecad08d17003cee1d59ed0a30c718cc7e0acbd37fda9f8 +size 1033813 diff --git a/asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/full.md b/asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..8aa2f3c9428ff0865a2f1e39636fecc69c4c4607 --- /dev/null +++ b/asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/full.md @@ -0,0 +1,447 @@ +# ASYMMETRY LEARNING FOR COUNTERFACTUAL-INVARIANT CLASSIFICATION IN OOD TASKS + +# S Chandra Mouli + +Department of Computer Science + +Purdue University + +chandr@purdue.edu + +# Bruno Ribeiro + +Department of Computer Science + +Purdue University + +ribeiro@cs.purdue.edu + +# ABSTRACT + +Generalizing from observed to new related environments (out-of-distribution) is central to the reliability of classifiers. 
However, most classifiers fail to predict label $Y$ from input $X$ when the change in environment is due to a (stochastic) input transformation $T^{\mathrm{te}} \circ X'$ not observed in training, as in training we observe $T^{\mathrm{tr}} \circ X'$, where $X'$ is a hidden variable. This work argues that when the transformations in train $T^{\mathrm{tr}}$ and test $T^{\mathrm{te}}$ are (arbitrary) symmetry transformations induced by a collection of known $m$ equivalence relations, the task of finding a robust OOD classifier can be cast as finding the simplest causal model that encodes a causal connection between the target labels and the symmetry transformations that are associated with label changes. We then propose a new learning paradigm, asymmetry learning, that identifies which symmetries the classifier must break in order to correctly predict $Y$ in both train and test. Asymmetry learning performs a causal model search that, under certain identifiability conditions, finds classifiers that perform equally well in-distribution and out-of-distribution. Finally, we show how to learn counterfactually-invariant representations with asymmetry learning in two simulated physics tasks and six image classification tasks. + +# 1 INTRODUCTION + +A significant challenge in classification tasks arises when the test distribution differs from the training distribution (i.e., the task requires out-of-distribution (OOD) generalization), since not accounting for the distribution shift can lead to poor generalization accuracy (Geirhos et al., 2020; Hu et al., 2020; Koh et al., 2020; D'Amour et al., 2020). If the learner sees examples from the test distribution, finding a classifier invariant to the distribution shift can still be a data-driven task (e.g., classical domain adaptation; Ben-David et al. (2007); Muandet et al. (2013); Zhao et al. (2019)).
This includes cases such as invariant risk minimization (Arjovsky et al., 2019) and its generalizations (Bellot & van der Schaar, 2020), where the training data and the test data distributions overlap in a way that can be exploited by data-driven algorithms (Creager et al., 2021; Krueger et al., 2021; Rosenfeld et al., 2020). + +However, if the learner sees no examples from the test distribution, the task is not purely data-driven and requires assumptions about the data generation process. More formally, our work considers general OOD tasks with training distribution $P(Y^{\mathrm{tr}}, X^{\mathrm{tr}})$, where $X^{\mathrm{tr}} \coloneqq T^{\mathrm{tr}} \circ X^{\dagger}$, with $X^{\dagger}$ a hidden variable with distribution $P(X^{\dagger})$ and $T^{\mathrm{tr}} \in \mathcal{T}$ a random input transformation in training, $T^{\mathrm{tr}}: \mathcal{X} \to \mathcal{X}$, where $t \circ x$ denotes the application of transformation $t \in \mathcal{T}$ to $x \in \mathcal{X}$. The difference between train and test is a change in input transformation, with $Y^{\mathrm{te}} \coloneqq Y^{\mathrm{tr}}$ and $X^{\mathrm{te}} \coloneqq T^{\mathrm{te}} \circ X^{\dagger}$, where $P(T^{\mathrm{tr}}) \neq P(T^{\mathrm{te}})$. We are interested in learning an invariant classifier that generalizes well on held-out examples from both the training and test distributions. + +The definition of transformation matters in this task. We first seek to generalize the existing literature on transformation invariances, e.g. (Shawe-Taylor, 1993; Kondor & Trivedi, 2018; Finzi et al., 2021; Maron et al., 2018; Murphy et al., 2019b; Mouli & Ribeiro, 2021; Bronstein et al., 2017). Our transformations are tied to equivalence relations rather than transformation groups, which frees them from the need to have inverses (a requirement for forming a transformation group).
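As a minimal sketch of this distinction (a hypothetical Python illustration of ours, not code from the paper): under the equivalence relation "same multiset of entries", any permutation is an invertible symmetry transformation, but sorting also satisfies $t \circ x \sim x$ while being many-to-one, so it has no inverse.

```python
# Hypothetical sketch (not the paper's code): symmetry transformations of an
# equivalence relation need not be invertible, unlike group elements.

# Equivalence relation: two tuples are related iff they carry the same
# multiset of entries (the relation induced by the permutation group).
def equivalent(a, b):
    return sorted(a) == sorted(b)

# An invertible symmetry transformation (a group element): swap two entries.
def swap_first_two(x):
    return (x[1], x[0]) + x[2:]

# A non-invertible symmetry transformation: sorting still satisfies
# t(x) ~ x, but it collapses a whole equivalence class to one point.
def canonicalize(x):
    return tuple(sorted(x))

x = (3, 1, 2)
assert equivalent(swap_first_two(x), x)
assert equivalent(canonicalize(x), x)
# many-to-one, hence no inverse:
assert canonicalize((3, 1, 2)) == canonicalize((2, 3, 1)) == (1, 2, 3)
```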
+ +We also explain why the task of learning an invariant OOD classifier is not, in general, solvable via traditional data augmentation. Before we continue describing our OOD learning task, it is important to clarify the connection between Pearl's causal hierarchy and invariant representation learning. + +Pearl's causal hierarchy and invariant representation learning. Pearl's causal hierarchy (Pearl & Mackenzie, 2018; Bareinboim et al., 2020) has three layers: observational (Layer 1), interventional (Layer 2), and counterfactual (Layer 3). Upper layers can perform lower-layer tasks, but not vice-versa (see Bareinboim et al. (2020)). Tasks should be described using the lowest layer that can solve them. + +Layer 1: Any task that can be performed without constraints on the causal model, i.e., by data alone, is observational (Layer 1). Traditional domain adaptation is a Layer 1 task. Note that a classifier that performs well OOD is itself a Layer 1 classifier, since it tries to predict $P(Y^{\mathrm{te}}|X^{\mathrm{te}})$. + +Layer 2: Without observations from $P(X^{\mathrm{te}})$ and/or $P(Y^{\mathrm{te}}|X^{\mathrm{te}})$, learning an OOD classifier requires some assumptions about the data generation process (Layer 2 or 3 assumptions). Data augmentation is traditionally an interventional task (Layer 2), with interesting new methods increasingly using causal language (Ilse et al., 2021; Teney et al., 2020). For instance, in a task predicting an image's foreground, knowing how to act on a training image $X^{\mathrm{tr}}$ to change the background seen in training to the backgrounds seen in test, $X^{\mathrm{te}} = T \circ X^{\mathrm{tr}}$ with a transformation $T$, implies we know how to predict $P(Y|X,do(T))$. + +Layer 3: Counterfactuals are the most challenging task. We start our description with an example. Consider a random continuous transformation $T_2^{\mathrm{tr}}$ (in training) which changes to a random transformation $T_2^{\mathrm{te}}$ (in test).
Let $X^{\dagger}$ describe a hidden variable such that $X^{\mathrm{tr}} := T_1 \circ T_2^{\mathrm{tr}} \circ T_3 \circ X^{\dagger}$ and $X^{\mathrm{te}} := T_1 \circ T_2^{\mathrm{te}} \circ T_3 \circ X^{\dagger}$, where $T_1$ and $T_3$ are independent continuous random transformations and $P(T_2^{\mathrm{tr}}) \neq P(T_2^{\mathrm{te}})$. Assume the target variable $Y$ depends only on $X^{\dagger}, T_1,$ and $T_3$. To counterfactually ask what would have happened to the observed input $x$ if we had forced $do(T_2^{\mathrm{tr}} = \tilde{t}_2)$, we are inquiring about $X(T_2^{\mathrm{tr}} = \tilde{t}_2)|X^{\mathrm{tr}} = x$. Note that $do(T_2^{\mathrm{tr}} = \tilde{t}_2)$ does not change $Y$. Also note that the knowledge of $X^{\mathrm{tr}} = x$ is an indirect statement about $T_2^{\mathrm{tr}}$ since $P(T_2^{\mathrm{tr}}|X^{\mathrm{tr}} = x) \neq P(T_2^{\mathrm{tr}})$. That is, for $x, x' \in \mathcal{X}$, + +$$ +P\left(X(T_2^{\mathrm{tr}} = \tilde{t}_2) = x' \mid X^{\mathrm{tr}} = x\right) = \int_t P\left(X(T_2^{\mathrm{tr}} = \tilde{t}_2) = x' \mid T_2^{\mathrm{tr}} = t, X^{\mathrm{tr}} = x\right) dP\left(T_2^{\mathrm{tr}} = t \mid X^{\mathrm{tr}} = x\right). \tag{1} +$$ + +Equation (1) and the difference between the causal hierarchy layers will be relevant for our results. + +Contributions. Our contributions can be described as follows: + +1. We introduce a generalization of transformation groups via symmetry transformations tied to equivalence classes that removes the requirement of invertible transformations common in definitions using transformation groups. +2. We introduce the concept of counterfactual-invariant representations for symmetry transformations and show how it can be described as a counterfactual task for causal structure discovery. +3.
Finally, we introduce asymmetry learning, which describes a representation regularization that, under a set of assumptions, learns the correct counterfactual-invariant OOD classifier. + +# 2 SYMMETRIES AND TRANSFORMATIONS + +Geometrically, an object is called symmetric if there is a transformation on the object that does not change its shape (in some definition of shape). For example, a square is symmetric with respect to rotations. The notion of symmetry, however, is not restricted to geometry. In general, we can define a mathematical object as symmetric if there is a transformation on the object that returns another object equivalent to the first (Rosen, 2008, Chapter 10). It is clear from this definition of symmetry that we first need to define what we mean by equivalent objects. For instance, we say two geometrical objects are equivalent if they have the same shape, but we need a more general definition. + +We define an input symmetry in a space $\mathcal{X}$ with at least two elements as an equivalence relation $\sim$. An equivalence relation in $\mathcal{X}$ is a binary relation $\sim$ such that for all $a, b, c \in \mathcal{X}$, we have (i) $a \sim a$, (ii) $a \sim b \iff b \sim a$, and (iii) $(a \sim b$ and $b \sim c) \implies a \sim c$. Equivalence relations allow us to define equivalent objects in $\mathcal{X}$: $a \sim b$ means $a$ is equivalent to $b$. The set of all objects equivalent to some $a \in \mathcal{X}$ is called the equivalence class of $a$, defined as $[a] := \{x \in \mathcal{X} : x \sim a\}$. Note that one can define $m \geqslant 2$ equivalence relations on the same input space. The equivalence class of $x$ with respect to equivalence relation $k$ is denoted $[x]^{(k)}, k = 1, \ldots, m$. Two inputs $a, b \in \mathcal{X}$ might be equivalent under one equivalence relation $\sim_1$, but not equivalent under a different equivalence relation $\sim_2$; that is, we can have both $b \in [a]^{(1)}$ and $b \notin [a]^{(2)}$.
Still, even in this last case it is possible that $a$ is equivalent to some other input $c \neq b$ in both equivalence relations, i.e., it is possible that $\exists c \in \mathcal{X}, c \neq a$, s.t. $c \in [a]^{(1)} \cap [a]^{(2)}$. We denote the collection of equivalence classes of $\mathcal{X}$ under the equivalence relation $\sim_k$ as the quotient space $\mathcal{X} / \sim_k := \{[x]^{(k)} \mid x \in \mathcal{X}\}$. + +Transformation group example. Consider the bijective transformations $t: \mathcal{X} \to \mathcal{X}$ of a transformation group $G$, $t \in G$. We now define an equivalence relation $\sim_G$ on $\mathcal{X}$, induced by $G$, as $t \circ x \sim_G x$ for all $t \in G$. The equivalence class $[\pmb{x}]^{(G)}$ is $\pmb{x}$'s orbit, defined as $[\pmb{x}]^{(G)} := \{\pmb{x}' : \exists t \in G, \pmb{x}' = t \circ \pmb{x}\}$. For example, if $G$ is the group that permutes the elements of vectors in $\mathbb{R}^3$, then $(1, 2, 3) \sim_G (2, 1, 3)$. + +Property functions example. Another way of deriving an equivalence relation is via functions of the input space $z: \mathcal{X} \to \mathbb{R}^p$, where the output $z(\boldsymbol{x})$ is a particular property of the vector $\boldsymbol{x} \in \mathcal{X}$. For example, given an observation of length $T$ from a dynamical system, $\boldsymbol{x} \in \mathbb{R}^{d \times T}$, a possible property function could be $z_{\text{energy}}(\cdot)$, which computes the energy of the dynamical system. Assuming there are $m$ known properties $z_1, \ldots, z_m$ with $z_i: \mathcal{X} \to \mathbb{R}^{p_i}$, we can construct corresponding equivalence relations $\sim_1, \ldots, \sim_m$ such that for any $\boldsymbol{x}, \boldsymbol{x}' \in \mathcal{X}$, $\boldsymbol{x} \sim_i \boldsymbol{x}'$ if $z_j(\boldsymbol{x}) = z_j(\boldsymbol{x}')$, $\forall j \neq i$. In words, two inputs are equivalent under $\sim_i$ if they have the same properties for all $z_j, j \neq i$. + +Symmetry transformations.
As seen above, symmetries can be defined without specifying how the input is transformed to create the equivalence classes, although defining a set of transformations is useful when describing the equivalence class. Given an equivalence relation $\sim$, we can define a set of transformations $\mathcal{T}$ that respect the equivalence relation such that $\forall t \in \mathcal{T}, \forall x \in \mathcal{X}, t \circ x \sim x$. We call $\mathcal{T}$ the set of symmetry transformations of $\sim$. Similar to transformation groups, $\mathcal{T}$ always has the identity transformation $t_{\mathrm{id}} \circ x = x$, but in contrast, the transformations in $\mathcal{T}$ need not be bijective. + +Join of equivalence relations. Similar to how two groups can be joined to form a larger group, two equivalence relations can be joined to form a coarser equivalence relation. Given two equivalence relations, $\sim_{1}$ and $\sim_{2}$, their join $\sim_{1} \vee \sim_{2}$ is defined as: for all $\pmb{x}, \pmb{x}'$, $\pmb{x}(\sim_{1} \vee \sim_{2})\pmb{x}'$ if and only if there exists a chain of equivalences $\pmb{x} \sim_{k_1} \pmb{x}_1, \dots, \pmb{x}_{h-1} \sim_{k_h} \pmb{x}'$ with all $k_j \in \{1, 2\}$. It is easy to check that $\sim_{1} \vee \sim_{2}$ is an equivalence relation. + +We are now ready to define a general causal model that defines the training and test distributions in our setting. + +# 3 SCM FOR SYMMETRY-BASED OOD TASKS + +Let $\mathcal{X},\mathcal{Y}$ denote the input and output spaces, respectively. We define our general structural causal model (SCM) as follows. We define $X^{\dagger}\in \mathcal{X}$ as the unobserved canonically ordered input + +$$ +X^{\dagger} := g(U_u), \tag{2} +$$ + +with $U_{u}$ a background random variable and $g: \mathcal{U}_u \to \mathcal{X}$ a measurable map. This definition is general enough to define any task.
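The join operation can be made concrete with a small union-find sketch (a hypothetical toy example of ours; the element space and the two partitions are invented for illustration): two elements fall into the same join class exactly when some chain of $\sim_1$ / $\sim_2$ relations connects them.

```python
# Hypothetical toy example: the join ~1 v ~2 of two equivalence relations on
# X = {0,...,5}, computed with union-find. Elements related under either
# relation are merged; transitivity closes the chains of relations.
X = list(range(6))
classes_1 = [{0, 1}, {2, 3}, {4, 5}]    # partition induced by ~1
classes_2 = [{1, 2}, {3, 4}, {0}, {5}]  # partition induced by ~2

parent = {x: x for x in X}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for cls in classes_1 + classes_2:
    members = sorted(cls)
    for m in members[1:]:
        union(members[0], m)

join_classes = {}
for x in X:
    join_classes.setdefault(find(x), set()).add(x)

# The chain 0 ~1 1 ~2 2 ~1 3 ~2 4 ~1 5 merges everything into one class.
print(sorted(map(sorted, join_classes.values())))  # [[0, 1, 2, 3, 4, 5]]
```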
+ +There are $m$ possible symmetries given in the form of equivalence relations $\sim_1, \ldots, \sim_m$ over the input space $\mathcal{X}$. Let $\mathcal{T}^{(k)}$ denote a set of symmetry transformations $t$ on $\mathcal{X}$ corresponding to the equivalence relation $\sim_k, 1 \leqslant k \leqslant m$. In other words, for all $\pmb{x} \in \mathcal{X}$ and $t \in \mathcal{T}^{(k)}$, we have $(t \circ \pmb{x}) \sim_k \pmb{x}$. Similarly, let $\mathcal{T}$ be the set of all symmetry transformations with respect to the join equivalence relation $\sim_{1,\dots,m} \equiv \sim_1 \vee \dots \vee \sim_m$. We can think of a transformation $t \in \mathcal{T}$ as a path $\pmb{x} \xrightarrow{t^{(k_1)}} \pmb{x}_1 \cdots \pmb{x}_{h-1} \xrightarrow{t^{(k_h)}} \pmb{x}_h$ that starts at $\pmb{x}$, applies a transformation $t^{(k_1)} \in \mathcal{T}^{(k_1)}$ to get $\pmb{x}_1 \in [\pmb{x}]^{(k_1)}$, and so on until it stops and outputs a value $\pmb{x}_h$, $h \geqslant 1$. + +Let $U_{1},\ldots ,U_{m}$ be independent background variables associated with the $m$ symmetries, where $U_{i}\in \mathcal{U}_{i}$, $i = 1,\dots,m$. These background variables together select a function $t(U_1,\dots ,U_m)$ from the set $\mathcal{T}$ as follows. Each $U_{k}$ independently selects a countable sequence of transformations $t_{1,U_k}^{(k)},t_{2,U_k}^{(k)},\ldots \in \mathcal{T}^{(k)}$. Then, $t(U_{1},\ldots ,U_{m})$ is defined by interleaving these transformations + +![](images/5abcb3dedc655fe70eb258b117de1fcca82d32ca25a285cf8075512042a01633.jpg) +Figure 1: Example that illustrates a few important concepts. (a) Training data shows how Equations (2) to (4) define the training distribution $P(X^{\mathrm{tr}}, Y^{\mathrm{tr}})$. Task: Given an image of a rod (shown in brown), we wish to predict the orientation of the rod, i.e., whether the rod is upright or flat ( $Y := h(U_{\mathrm{rot}})$ ).
In this example, we have $\mathbb{D} = \{\mathrm{rot}\}$ (image rotations $0^\circ$ and $90^\circ$ ) and $\bar{\mathbb{D}} = \{\mathrm{trans}\}$ (horizontal translations of $-5, 0, +5$ units), as any horizontal translation does not affect the orientation of the rod. (b) The test data (only a single example shown) suffers an OOD shift through a different distribution over $P(U_{\mathrm{trans}})$, where non-zero translations can happen before the second rotation. (c) Here we illustrate why an invariance that is good for traditional data augmentation, such as counting the brown pixels in the green shaded area, would fail in test if, say, a $+5$ units horizontal translation happens before a rotation. (d) Here we illustrate why counterfactual language is needed to define how the input data would change in the presence of changes to $U_{\mathrm{trans}}$. Using counterfactuals, it is finally clear that the invariant representation must be able to also consider the number of brown pixels inside the horizontal purple and green bands (among other horizontal bands). + +![](images/a2c3a8827e2568ea5f2f9dd00bdde933d039c92abe43f2cd868422fe8542f319.jpg) + +$t(U_{1},\ldots ,U_{m}) := (t_{1,U_{1}}^{(1)}\circ \dots \circ t_{1,U_{m}}^{(m)})\circ \dots \circ (t_{r,U_{1}}^{(1)}\circ \dots \circ t_{r,U_{m}}^{(m)})\circ \dots$ to construct the path described above. Since $\mathcal{T}^{(1)},\mathcal{T}^{(2)},\ldots$ contain the identity transformation, $t(U_{1},\ldots ,U_{m})$ can be described by a finite sequence of transformations. The observed $X$ is the result of a transformation of $X^{\dagger}$ + +$$ +X := t(U_1, \dots, U_m) \circ X^{\dagger}. \tag{3} +$$ + +Finally, the label $Y$ is defined as a function of the untransformed canonical input $X^{\dagger}$ as + +$$ +Y := h\left(X^{\dagger}, (U_i)_{i \in \mathbb{D}}, U_Y\right), \tag{4} +$$ + +where $\mathbb{D} \subseteq \{1, \ldots, m\}$ is unknown.
This means that $Y$ is not invariant with respect to the equivalence relations $\sim_{i}, i \in \mathbb{D}$, i.e., examples $\pmb{x}$ and $\pmb{x}' \in [\pmb{x}]^{(i)}$ can have different labels. A distribution over the variables $U_{u}, \{U_{i}\}_{i=1}^{m}, U_{Y}$ entails a joint distribution $P(X,Y)$ over the observed variables. + +Illustrative SCM example. Figure 1 illustrates our data generation process. The training data in Figure 1(a) has $X^{\dagger}$ defined as a centered upright brown rod (i.e., $X^{\dagger}$ is deterministic). The label $Y$ is defined by the rotation transformations $\mathcal{T}^{\mathrm{rot}} = \{T_{0^\circ}^{\mathrm{rot}}, T_{90^\circ}^{\mathrm{rot}}\}$. The image can also be horizontally translated by $\{-5,0,5\}$ units via transformations $\mathcal{T}^{\mathrm{trans}} = \{T_{-5}^{\mathrm{trans}}, T_0^{\mathrm{trans}}, T_{+5}^{\mathrm{trans}}\}$ (only the 0 and +5 translations are depicted), but $Y$ does not depend on these horizontal translations. The transformations applied to $X^{\dagger}$ are randomly chosen via $U_{\mathrm{rot}}$ and $U_{\mathrm{trans}}$, which are two bidimensional vectors indexing a sequence of four transformations that interleave rotations and translations (see Figure 1). A representation that counts the number of brown pixels in the green shaded area of $X^{\mathrm{tr}}$ is enough to achieve 100% accuracy in the training distribution. We formally define OOD distribution shifts next, using Figure 1 for illustration. + +OOD distribution shift. Let $\bar{\mathbb{D}} = \{1,\dots ,m\} \backslash \mathbb{D}$ be the complement of the set of symmetry relations $\mathbb{D}$ that $Y$ depends on.
We define the OOD distribution shift between train and test as a shift in the distribution $P((U_i)_{i\in \bar{\mathbb{D}}})$, influencing the distribution of input transformations in Equation (3), which in turn can shift the distributions $P(X^{\mathrm{tr}}), P(Y^{\mathrm{tr}}|X^{\mathrm{tr}}), P(Y^{\mathrm{tr}},X^{\mathrm{tr}})$ to $P(X^{\mathrm{te}}), P(Y^{\mathrm{te}}|X^{\mathrm{te}}), P(Y^{\mathrm{te}}, X^{\mathrm{te}})$, respectively. Since $X$ does not causally affect $Y$ in our structural causal model (Equation (4)), changes in input transformations are able to shift $P(Y|X)$. For example, in Figure 1(b) the test data (only a single example shown) could suffer an OOD shift due to a different distribution over $P(U_{\mathrm{trans}})$ that introduces non-zero translations before the second rotation. Note that the representation that counted the number of brown pixels in the green shaded area, which was perfect for the training inputs $X^{\mathrm{tr}}$, will achieve poor accuracy on the test inputs $X^{\mathrm{te}}$. + +Learning OOD classifiers. Equation (4) shows that the label $Y$ is invariant to changes in the distribution of $(U_i)_{i \in \bar{\mathbb{D}}}$ in the test distribution, but we do not know $\mathbb{D}$. Hence, if our representation of $X$ is invariant to changes in the distribution of $(U_i)_{i \in \bar{\mathbb{D}}}$, we will be able to perform the OOD task. + +# 4 ASYMMETRY LEARNING & FINDING THE RIGHT REPRESENTATION SYMMETRY FOR THE OOD TASK + +# 4.1 FINDING OOD-INVARIANT REPRESENTATIONS AS CAUSAL STRUCTURE DISCOVERY + +We first define the process of finding an OOD-invariant representation for the symmetries $\{\sim_i\}_{i\in \bar{\mathbb{D}}}$ our classifier should be invariant to in the test data. Since $Y$ does not depend on $\{U_i\}_{i\in \bar{\mathbb{D}}}$, we seek a representation of $X$ that is invariant to transformations driven by $\{U_i\}_{i\in \bar{\mathbb{D}}}$.
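The rod task of Figure 1 can be re-created in a heavily simplified form (our own hypothetical toy code; the grid size, rod length, and decision threshold are invented, and this is not the paper's implementation). The "green band" pixel counter is perfect on the training translation distribution but degrades under the shifted one, while a translation-invariant counter is unaffected:

```python
import random

# Hypothetical, heavily simplified re-creation of the rod SCM of Figure 1
# (grid size, rod length, and threshold are our own inventions).
GRID = 21

def render(rot, trans):
    """Rod pixels after rotating/translating the canonical upright rod X-dagger."""
    rod = [(10, r) for r in range(8, 13)]          # upright: 5 pixels, one column
    if rot == 90:
        rod = [(c, 10) for c in range(8, 13)]      # flat: 5 pixels, one row
    return {((c + trans) % GRID, r) for c, r in rod}

def sample(trans_support):
    rot = random.choice([0, 90])                   # U_rot (in D): decides Y
    trans = random.choice(trans_support)           # U_trans (in D-bar): label-irrelevant
    return render(rot, trans), int(rot == 0)

# Representation A: count rod pixels in a fixed central band
# (the "green shaded area") -- sensitive to translations.
def gamma_fixed(x):
    return sum(1 for c, r in x if 9 <= c <= 11)

# Representation B: translation-invariant -- best band count over all shifts.
def gamma_inv(x):
    return max(sum(1 for c, r in x if (c - s) % GRID in (9, 10, 11))
               for s in range(GRID))

def accuracy(gamma, trans_support, thresh=4):
    data = [sample(trans_support) for _ in range(500)]
    return sum((gamma(x) >= thresh) == y for x, y in data) / len(data)

random.seed(0)
# Train: translations drawn from {0}; test: shifted to {-5, 0, +5}.
# The fixed-band counter is perfect in training but misclassifies translated
# upright rods in test; the invariant counter stays perfect in both.
print("fixed band:", accuracy(gamma_fixed, [0]), accuracy(gamma_fixed, [-5, 0, 5]))
print("invariant :", accuracy(gamma_inv, [0]), accuracy(gamma_inv, [-5, 0, 5]))
```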
+ +Definition 1 introduces the concept of counterfactual invariance for symmetry transformations. We note that this definition is less restrictive than the parallel work of Veitch et al. (2021, Definition 1.1): whereas Veitch et al. (2021, Definition 1.1) require invariance over the entire sample space, we only require invariance over the test support of the transformation variable $U_{i}$. The definitions are equivalent if the test support is the entire sample space of $U_{i}$. + +Definition 1 (Counterfactual-invariant representations for symmetry transformations). Assume the SCM defined in Equations (2) to (4). A representation $\Gamma_i: \mathcal{X} \to \mathbb{R}^d$, $d \geqslant 1$, is counterfactual-invariant to the transformations $T_{1,U_i}, T_{2,U_i}, \ldots$ of equivalence relation $\sim_i$, $1 \leqslant i \leqslant m$, if + +$$ +\Gamma_i(x) = \Gamma_i\left(X(U_i = \tilde{u}_i) \mid X = x\right) +$$ + +almost everywhere, $\forall \tilde{u}_i\in \operatorname{supp}(U_i^{\mathrm{te}}),\forall x\in \operatorname{supp}(X^{\mathrm{tr}})$, where $\operatorname{supp}(A)$ is the support of random variable $A$. A representation $\Gamma_{\mathbb{S}}:\mathcal{X}\to \mathbb{R}^d$, $d\geqslant 1$, is counterfactual-invariant to a subset $\mathbb{S}\subseteq \{1,\ldots ,m\}$ if it is jointly counterfactual-invariant to the transformation indices $\{U_j\}_{j\in \mathbb{S}}$ of equivalence relations $\{\sim_j\}_{j\in \mathbb{S}}$. + +We refer the reader to Equation (1) for the relationship between the counterfactual variables $X(U_i = \tilde{u})|U_i = u$ and $X(U_i = \tilde{u})|X = x$. Figure 1(d) illustrates why counterfactual language is important for our task: it states that given an input $X^{\mathrm{tr}} = x$ we need to know how it would have been different had we chosen a different distribution $P(U_{\mathrm{trans}})$, resulting in a different sequence of transformations $T_{1,U_{\mathrm{trans}}}, T_{2,U_{\mathrm{trans}}}$.
From Figure 1(c) it is clear that we cannot simply data-augment our training data with translations, since we would think that counting brown pixels in the green shaded area is an invariant representation for $U_{\mathrm{trans}}$. + +Up until now we have not imposed restrictions on the types of transformations $\mathcal{T}^{(i)}, i = 1, \ldots, m$, we consider in this work. Our next results require imposing conditions on these transformations. + +Definition 2 (Equivalence class lumpability). The quotient space $\mathcal{X} / \sim_{i}$ is the set of equivalence classes of $\mathcal{X}$ with respect to equivalence relation $\sim_{i}, i = 1, \ldots, m$. Let $[\pmb{x}]^{(i)} \in \mathcal{X} / \sim_{i}$ be the equivalence class of $\pmb{x} \in \mathcal{X}$ with respect to equivalence relation $\sim_{i}$. Then, $\mathcal{X} / \sim_{i}$ is said to be lumpable with respect to a transformation set $\mathcal{T}$ if $\forall [\pmb{x}]^{(i)} \in \mathcal{X} / \sim_{i}$ and $\forall t \in \mathcal{T}$, + +$$ +\exists [\boldsymbol{x}']^{(i)} \in (\mathcal{X} / \sim_i) \ \text{s.t.}\ \boldsymbol{x}^* \in [\boldsymbol{x}]^{(i)} \Rightarrow t \circ \boldsymbol{x}^* \in [\boldsymbol{x}']^{(i)}. +$$ + +In words, if the lumpability condition in Definition 2 holds for an equivalence relation $\sim_{i}$ with respect to a set of transformations $\mathcal{T}$, then every transformation in $\mathcal{T}$ maps all points within an equivalence class $[\pmb{x}]^{(i)}\in \mathcal{X} / \sim_i$ to points in another equivalence class $[\pmb{x}^{\prime}]^{(i)}\in (\mathcal{X} / \sim_i)$. To illustrate the lumpability condition, consider two transformation groups $G_{1}$ and $G_{2}$ whose transformations commute, i.e., $\forall (t_1,t_2)\in G_1\times G_2$, $t_1\circ t_2 = t_2\circ t_1$.
Then the equivalence classes imposed by $G_{i}$ , i.e., the orbits $[\pmb{x}]^{(i)} = \{t_i\circ \pmb {x}:\forall t_i\in G_i\}$ , are lumpable with respect to the transformations $G_{j}$ , for $i,j\in \{1,2\}$ and $j\neq i$ . + +![](images/7bb75813a1c0ed442d458cc009b46e6afe89d6ac412dbd883dd1fe54e0474001.jpg) +(i) Causal DAG + +![](images/b0032a7666bad397a645b17b5b7d7be750045abc2a3627b896fb3b5c111100b7.jpg) +(ii) Causal DAG in (i) with counterfactual-invariant representation of X + +![](images/a7822675e39ee07bf71132bf554cb041266fdbe9b7fe79c251fcdbc2984fbbdb.jpg) +(iii) Asymmetry learning: Causal model search using information in asymmetry (illustration with $m = 3$ ). Red arrows indicate the asymmetry being considered in the causal model. +(a) + +![](images/6fac570aee291569cf9f26e841f6d5aa664c0c1b1c349e595643a058ffdc3e93.jpg) +(b) +Figure 2: (a) (i) True causal DAG; (ii) causal DAG of counterfactual invariant representation; (iii) Causal model search. (b) Partial order over invariant representations (arrows indicate higher invariance). (c) An example figure where training data has a single example per equivalence class in $\mathcal{X} / \sim_{1}$ (green rectangles). Then, we have $\mathrm{COMP}(\mathcal{F}_{\{1\}},\mathcal{D}) = \mathrm{COMP}(\mathcal{F}_{\emptyset},\mathcal{D})$ even though $\mathcal{F}_{\{1\}}$ is more invariant (simpler) than $\mathcal{F}_{\emptyset}$ . + +![](images/50e382dad831edf2c6f93ca26388a1ddbc875fab4a71c7ac3f7427f03e679a34.jpg) +(c) + +Figure 2a(i) shows our structural causal graph where an edge $U_{i} \to Y$ exists only if $i \in \mathbb{D}$ . Then, we use the definition of lumpability to prove that, under certain conditions, a most-expressive representation $\Gamma_{i}$ invariant with respect to $\sim_{i}$ allows us to identify if there is no edge $U_{i} \to Y$ in the causal DAG. + +Theorem 1 (Counterfactual invariance & causal DAG identification). 
Let $\mathcal{X} / \sim_{i}$ be lumpable given every $\mathcal{T}^{(j)}, j \neq i$, as in Definition 2. Then, the structural causal DAG implied by Equations (2) to (4) (depicted in Figure 2a(i)) does not contain the edge $U_{i} \to Y$ iff + +$$ +\left\| P(Y \mid \Gamma_i(X), U_Y) - P(Y \mid X, U_Y) \right\|_{\mathrm{TV}} = 0, \tag{5} +$$ + +$\forall P(X^{\dagger}), \forall P(U_1), \ldots, \forall P(U_m)$, where $\Gamma_i$ is a most-expressive representation that is invariant with respect to $\sim_i$. + +The proof is in the Appendix. With the lumpability assumption on $\mathcal{X} / \sim_{i}$, $\Gamma_{i}$ in Theorem 1 is a counterfactual-invariant representation. We now use Figure 2a(ii) to describe the result in Theorem 1. We first note that the representation $\Gamma_{\bar{\mathbb{D}}}$ depicted in the figure is counterfactual-invariant to $\bar{\mathbb{D}}$, and hence also counterfactual-invariant to any $k \in \bar{\mathbb{D}}$. Next we see that since the representation $\Gamma_{\bar{\mathbb{D}}}$ is counterfactual-invariant to $U_{k}$, there is no arrow $U_{k} \to \Gamma_{\bar{\mathbb{D}}}(X)$ in Figure 2a(ii). If there is no arrow $U_{k} \to Y$, the missing arrow from $U_{k}$ to $\Gamma_{\bar{\mathbb{D}}}(X)$ will have no influence on the ability of $\Gamma_{\bar{\mathbb{D}}}(X)$ to predict $Y$, assuming $\Gamma_{\bar{\mathbb{D}}}$ is most-expressive. If there is an arrow $U_{k} \to Y$, cutting the arrow $U_{k} \to \Gamma_{\bar{\mathbb{D}}}(X)$ creates a loss in predictive performance from $\Gamma_{\bar{\mathbb{D}}}(X)$ to $Y$ for some distribution of the background and observable variables. If $\Gamma_{\bar{\mathbb{D}}}(X)$ never loses any predictive power over $Y$ for any distribution of the background and observable variables, then there is no arrow $U_{k} \to Y$. + +Assumption 1 (Asymmetry learning training data).
In asymmetry learning we assume that every $\mathcal{X} / \sim_{i}, i \in \{1, \ldots, m\}$, is lumpable given $\mathcal{T}^{(j)}, j \neq i$, and that, for every arrow $U_{j} \to Y$ in the causal DAG of Figure 2a(i), $j \in \{1, \ldots, m\}$, a large training dataset sampled from $(Y^{\mathrm{tr}}, X^{\mathrm{tr}})$ contains observations of $U_{j}$ that violate Equation (5). Hence, if Equation (5) holds for some $i \in \{1, \ldots, m\}$ in this dataset, we can conclude that there is no arrow $U_{i} \to Y$ in the true causal DAG. See Appendix A for a justification of this assumption. + +Next we use Assumption 1 and the previous results to search for the right OOD invariance. + +# 4.2 CAUSAL STRUCTURE DISCOVERY OF RELEVANT SYMMETRIES + +We need a general procedure for obtaining the unknown set $\mathbb{D}$, which is equivalent to finding all transformation indices $\{U_i\}_{i \in \mathbb{D}} \subseteq \{U_1, \ldots, U_m\}$ that act as confounders between $Y$ and $X$ in the causal DAG in Figure 2a(i). Finding whether an edge exists or not in the causal DAG is known as the causal structure discovery problem (e.g., Heinze-Deml et al. (2017)). The principle of our search is learning the causal structure with the fewest possible edges into $Y$ (i.e., where $Y$ is invariant to the most $U_i$, $i = 1, \ldots, m$ ) while also maximizing the likelihood of the observed data. Accordingly, we take the score-based causal discovery approach (Chickering (2002); Huang et al. (2018)) that assigns scores to each allowed DAG based on the training data and the complexity of the DAG, to find a minimal causal structure that fits the training data. This idea is visualized in Figure 2a(iii)
Our search space is simpler than typical structure discovery tasks: The DAGs in our search space have the same structure for $X$ and only differ in edges of the form $U_{i} \rightarrow Y, i \in \{1, \dots, m\}$ . Next, we describe a scoring criterion that uses Theorem 1 and counterfactual-invariant representations to assign scores to the corresponding causal structures. + +Proposed DAG scoring criterion. For each DAG in the search space, we wish to assign a score based on the training data $\mathcal{D} = \{(x^{(i)},y^{(i)})_{i = 1}^{n^{\mathrm{tr}}}$ under Assumption 1 for a classification task with $C$ classes. Theorem 1 shows that there is a correspondence between a causal structure without the edge $U_{i}\rightarrow Y$ and a predictive probability gap between the original input and a most-expressive representation $\Gamma_{i}$ that is counterfactually-invariant to $U_{i}$ . Thus, under Assumption 1, we can represent the causal search from Figure 2a(iii) in terms of a search over counterfactually-invariant representation function classes as shown in Figures 2a(iii) and 2b. Formally, we are given a collection of function classes $\mathcal{F}:= \{\mathcal{F}_{\mathbb{S}}:\mathbb{S}\subseteq \{1,\dots,m\}\}$ , where $\mathcal{F}_{\mathbb{S}}$ is a family of functions $\Gamma_{\mathbb{S}}$ that are counterfactually-invariant to all $U_{i},i\in \mathbb{S}$ (Definition 1). We wish to score each of the function classes $\mathcal{F}_{\mathbb{S}}\in \mathcal{F}$ to indirectly learn the correct causal structure. + +The minimum description length (MDL) principle (Schwarz, 1978) is commonly used for causal structure discovery (Budhathoki & Vreeken, 2016; 2017) and comes with the key insight that learning from data can be viewed as compressing it. Given the collection $\mathcal{F}$ and the training dataset $\mathcal{D}$ , MDL finds the function class $\mathcal{F}_{\mathbb{S}} \in \mathcal{F}$ that compresses $\mathcal{D}$ the most. 
While there are several ways of encoding a dataset given a function class, the normalized maximum likelihood (NML) code is known to be optimal (Shtarkov, 1987). The NML code length is computed as

$$
L_{\mathrm{nml}}\left(\mathcal{F}_{\mathbb{S}}, \mathcal{D}\right) = -L\left(\mathcal{F}_{\mathbb{S}} \mid \mathcal{D}\right) + \mathrm{COMP}\left(\mathcal{F}_{\mathbb{S}}, \mathcal{D}\right), \tag{6}
$$

where $L(\mathcal{F}_{\mathbb{S}} \mid \mathcal{D}) = \sup_{\Gamma_{\mathbb{S}}\in \mathcal{F}_{\mathbb{S}}}\sum_{i = 1}^{n^{\mathrm{tr}}} \log P(\boldsymbol{y}^{(i)} \mid \Gamma_{\mathbb{S}}(\boldsymbol{x}^{(i)}))$ is the maximum log-likelihood of $\mathcal{F}_{\mathbb{S}}$ given the data and

$$
\mathrm{COMP}\left(\mathcal{F}_{\mathbb{S}}, \mathcal{D}\right) = \log \left[ \sum_{\substack{\boldsymbol{y}^{(1)}, \dots, \boldsymbol{y}^{(n^{\mathrm{tr}})}: \\ \boldsymbol{y}^{(i)} \in \{0, \dots, C\}}} \sup_{\Gamma_{\mathbb{S}} \in \mathcal{F}_{\mathbb{S}}} \prod_{i = 1}^{n^{\mathrm{tr}}} P\left(\boldsymbol{y}^{(i)} \mid \Gamma_{\mathbb{S}}(\boldsymbol{x}^{(i)})\right) \right] \tag{7}
$$

measures the complexity of the function class $\mathcal{F}_{\mathbb{S}}$ by computing how well it can represent different label distributions for the given training inputs $\{\boldsymbol{x}^{(i)}\}_{i = 1}^{n^{\mathrm{tr}}}$. We can estimate the combinatorial sum in Equation (7) by uniformly sampling random labels for all the training examples.

Since $\mathrm{COMP}(\mathcal{F}_{\mathbb{S}},\mathcal{D})$ is computed using the training data, it may underestimate the complexity of function classes if, for instance, all the training examples are generated with $U_{i} = u_{i}$. Then, $\mathcal{F}_{\{i\}}$ and $\mathcal{F}_{\emptyset}$ are given the same score even though $\mathcal{F}_{\{i\}}$ is clearly more invariant and thus a simpler function class.
This can happen in practice if, say, all training images are upright with no rotations applied; both rotation-invariant and rotation-sensitive function classes then receive the same complexity score.

In order to break such ties in our COMP score, asymmetry learning adds a term to the NML score that favors models with higher invariance according to the partial order (see Figure 2b). We extend the penalty proposed by Mouli & Ribeiro (2021) and use $R(\mathcal{F}_{\mathbb{S}})\coloneqq |\{\mathcal{F}':\mathcal{F}'\in \mathcal{F},\mathcal{F}'>\mathcal{F}_{\mathbb{S}}\}|$, the number of function classes that are higher in the partial order than $\mathcal{F}_{\mathbb{S}}$, as the tie-breaking term. For example, in Figure 2b, $R(\mathcal{F}_{\{1\}}) = |\{\mathcal{F}_{\{1,2\}},\mathcal{F}_{\{1,3\}},\mathcal{F}_{\{1,2,3\}}\}| = 3$. We define the final score of each function class $\mathcal{F}_{\mathbb{S}}\in \mathcal{F}$ as

$$
S\left(\mathcal{F}_{\mathbb{S}}, \mathcal{D}\right) = L_{\mathrm{nml}}\left(\mathcal{F}_{\mathbb{S}}, \mathcal{D}\right) + R\left(\mathcal{F}_{\mathbb{S}}\right). \tag{8}
$$

The score in Equation (8) can be minimized by a score-based causal discovery algorithm to obtain the final DAG. We use Greedy Equivalence Search (Chickering, 2002) to showcase a concrete instantiation of asymmetry learning; other score-based structure discovery algorithms could also be used.

Greedy Equivalence Search. Greedy Equivalence Search (GES) is a greedy search algorithm that optimizes a given scoring function over DAGs. In our setting, the search begins with a DAG with no edges of the form $U_{i} \to Y$, $i \in \{1, \dots, m\}$. In the first phase, GES adds these edges one at a time, choosing at each step the edge that maximally improves the score in Equation (8), until no addition improves it. In the second phase, GES starts from the DAG obtained at the end of the first phase and deletes edges one at a time until no deletion improves the score. The DAG obtained at the end of the second phase is the final output of the algorithm. Under the causal Markov and faithfulness assumptions, Chickering (2002) showed that GES is optimal in the large-sample limit if the scoring function is locally consistent.

Table 1: Results for different function classes on the pendulum task with $\mathbb{D} = \{1\}$ and $\mathbb{D} = \{1,2\}$. $R(\mathcal{F})$, $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$, and $S(\mathcal{F},\mathcal{D})$ are as discussed in Section 4.2. Bold values indicate the function class chosen by the GES method with the proposed scoring criterion. Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i\in \overline{\mathbb{D}}})$.

| Model class | Architecture | $R(\mathcal{F})$ | $\widehat{\mathrm{COMP}}$, $\mathbb{D}{=}\{1\}$ | $S$, $\mathbb{D}{=}\{1\}$ | Train Acc. | Test Acc. | $\widehat{\mathrm{COMP}}$, $\mathbb{D}{=}\{1,2\}$ | $S$, $\mathbb{D}{=}\{1,2\}$ | Train Acc. | Test Acc. |
|---|---|---|---|---|---|---|---|---|---|---|
| $\mathcal{F}_{\{2\}}$ | $X \to z_1 \to Y$ | 0 | 0.28 | **223.89** | 98.5 (0.9) | 98.3 (1.4) | 0.50 | 1532.84 | 72.7 (0.4) | 69.4 (0.5) |
| $\mathcal{F}_{\{1\}}$ | $X \to z_2 \to Y$ | 0 | 0.38 | 2633.32 | 63.8 (7.0) | 51.2 (1.0) | 0.29 | 2284.75 | 85.2 (0.5) | 84.6 (0.2) |
| $\mathcal{F}_{\emptyset}$ | $X \to Y$ | 2 | 1.25 | 626.80 | 98.9 (0.8) | 77.6 (11.5) | 0.99 | **54.54** | 99.7 (0.2) | 99.5 (0.2) |

# 5 RESULTS

Pendulum task description. We evaluate the proposed method in a simulated classification task. Our input $\boldsymbol{x}$ is a motion vector over time $(\theta_t, \frac{d\theta_t}{dt})_{t=1}^T$ of a simple pendulum of unknown length $l$ after it is dropped from some initial angle $\theta_0$ with $\frac{d\theta_0}{dt} = 0$. After an initial $\tau$ seconds of uninterrupted motion, we simulate an elastic collision by placing another object of the same mass at the bottom. The classification task is to predict whether the kinetic energy imparted by the pendulum is enough to move the second object beyond a certain threshold.

Physical properties and equivalence relations. We consider the following two properties of the dynamical system described above: $z_{1}:\mathcal{X}\to \mathbb{R}$, which computes the initial potential energy of the system, and $z_{2}:\mathcal{X}\rightarrow \mathbb{R}$, which returns the time of collision. The equivalence relations $\sim_{1}$ and $\sim_{2}$ are defined from these properties as in Section 2. For instance, two pendulum motion curves $\boldsymbol{x},\boldsymbol{x}'$ are equivalent with respect to $\sim_{1}$, i.e., $\boldsymbol{x}\sim_{1}\boldsymbol{x}'$, if they have the same time of collision, $z_{2}(\boldsymbol{x}) = z_{2}(\boldsymbol{x}')$.
Then $\mathcal{T}^{(1)}$ consists of transformations that change the initial potential energy of the system (for example, by changing the length of the pendulum or the initial dropping angle $\theta_0$) while keeping the time of collision the same. Similarly, $\boldsymbol{x}\sim_{2}\boldsymbol{x}'$ if their respective potential energies are the same, and transformations in $\mathcal{T}^{(2)}$ change the time of collision while keeping the initial potential energy the same. Note that the space of equivalence classes $\mathcal{X}/\sim_{1}$ is lumpable with respect to $\mathcal{T}^{(2)}$ and vice versa (Definition 2). Thus, by Theorem 1, we can use the predictive performance of counterfactually-invariant representations to score the causal DAGs.

Unknown $\mathbb{D}$ and OOD classification. We consider two scenarios for the label $Y$ given $X$. First, if the motion of the pendulum is not damped by friction, then $Y$ depends only on $z_{1}(\boldsymbol{x})$, i.e., $\mathbb{D} = \{1\}$. Second, if the motion of the pendulum is damped, then $Y$ depends on both $z_{1}(\boldsymbol{x})$ and $z_{2}(\boldsymbol{x})$, i.e., $\mathbb{D} = \{1,2\}$. The extrapolation test data is generated by shifting the distribution of the background variables $\{U_i\}_{i\in \overline{\mathbb{D}}}$. The task of a structure discovery algorithm is to correctly identify $\mathbb{D}$.

Results. We use the greedy equivalence search (GES, Section 4.2) algorithm to search over the different causal graphs with the proposed scoring criterion defined in Equation (8). We build classes of counterfactually-invariant representations $\mathcal{F}_{\mathbb{S}}$ corresponding to each possible value of $\mathbb{S} \subsetneq \{1,2\}$, where every $\Gamma_{\mathbb{S}} \in \mathcal{F}_{\mathbb{S}}$ is invariant to $\{U_i\}_{i \in \mathbb{S}}$.
For example, $\mathcal{F}_{\{1\}}$ is a family of feedforward neural networks that take only $z_2(\boldsymbol{x})$ as input, i.e., are invariant to $z_1(\boldsymbol{x})$, whereas $\mathcal{F}_{\emptyset}$ is a family of sequence models (e.g., LSTMs) with no invariance. Table 1 reports the estimated complexity $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ and the final scores $S(\mathcal{F},\mathcal{D})$ for the different function classes on the two tasks. The bold values indicate the function class chosen by the GES algorithm. When $\mathbb{D} = \{1\}$, the greedy search stops after adding the edge $U_1 \to Y$, as adding the second edge $U_2 \to Y$ only worsens (increases) the score. When $\mathbb{D} = \{1,2\}$, the greedy search is able to improve the score by adding both edges, first $U_1 \to Y$ and then $U_2 \to Y$. In both cases, the function class chosen by the search achieves the highest extrapolation test accuracy.

Image classification task. Appendices A.4 and A.5 also offer an application to image classification using image transformation sets (both groups and non-groups).

# 6 RELATED WORK

Counterfactual inference and invariances. Recent efforts have brought causal inference to machine learning (extensively reviewed in Schölkopf et al. (2021); Schölkopf (2022)). Invariant Causal Prediction (Peters et al., 2015; Heinze-Deml et al., 2018) and Invariant Risk Minimization methods (Arjovsky et al., 2019; Bellot & van der Schaar, 2020) learn representations that are invariant across multiple environments, but these have been shown to be insufficient for OOD generalization in classification tasks without additional assumptions (Ahuja et al., 2021). Wang & Jordan (2021) use counterfactual language to formally define and learn non-spurious representations from a single environment that can extrapolate to new environments. Veitch et al.
(2021) define counterfactually invariant predictors $f(X)$ when $X$ has a single parent $Z$ and provide conditions such predictors must satisfy over the observed distribution (given an SCM). Kaushik et al. (2020; 2021) propose counterfactual data augmentation for text datasets, but they either require a fully-specified toy SCM or rely on humans in the loop to generate the counterfactual data. Other counterfactual methods (Johansson et al., 2016; Shalit et al., 2017; Qidong et al., 2020) learn representations to predict counterfactual changes in some observed variables, whereas in our setting the transformation variables $U_{i}$ that generate the observed $X$ are unobserved. An in-depth comparison of our work with existing counterfactual methods is presented in Appendix A.3.

Domain adaptation and domain generalization. Domain adaptation and domain generalization (e.g., Long et al. (2017); Muandet et al. (2013); Quionero-Candela et al. (2009); Rojas-Carulla et al. (2018); Shimodaira (2000); Zhang et al. (2015), among others) consider observed or known shifts in the data distribution, for instance given the test distribution $P(X^{\mathrm{te}})$, rather than counterfactual questions.

Causal structure discovery. Methods for causal structure discovery can be broadly classified into two categories. Constraint-based approaches (e.g., Spirtes et al. (2001); Sun et al. (2007)) use conditional independence tests and reject causal graphs that impose more independence than what is observed in the data. On the other hand, score-based causal discovery approaches (e.g., Chickering (2002); Huang et al. (2018); Ding et al. (2020); Zhu et al. (2020)) assign scores to each allowed causal graph based on the data and find the one with the best score.
While there are several works (Budhathoki & Vreeken, 2016; 2017; Bornschein et al., 2021) that use the minimum description length (MDL) principle (Schwarz, 1978) as a scoring criterion, we show why it is insufficient for out-of-distribution tasks and add a tie-breaking term. Goudet et al. (2017) minimize the divergence between a distribution generated by a learnt causal DAG and the observed data distribution; however, the method is limited to orienting edges over observed variables, whereas our transformation variables $U_{i}$ are unobserved. Recently, GFlowNets (Bengio et al., 2021a;b) have been used to sample DAGs proportionally to a score function for Bayesian structure learning (Deleu et al., 2022); in contrast, we are interested in finding the single best DAG, i.e., the one with the minimum score.

Group-invariant representations. The majority of works strictly enforce G-invariances either within the architecture (e.g., Zaheer et al. (2017); Cohen et al. (2016); Lyle et al. (2020); Murphy et al. (2019a)) or via data augmentation (Chen et al., 2020), and do not handle the case where the target is actually influenced by the transformation of the input. Other works (Benton et al., 2020; Zhou et al., 2020; van der Wilk et al., 2018; Anselmi et al., 2019) consider learning symmetries from the training data but do not consider the extrapolation task, which we show can be solved only under certain conditions. Mouli & Ribeiro (2021) consider the special case where the transformations come from normal subgroups and do not formally describe the causal task. These works rely on invertible transformations, while we define symmetries more generally via equivalence relations. Dubois et al. (2021) also define invariances via equivalence relations and, under the assumption that all such invariances hold in the data, design methods for data compression. Our goal is rather different: we want to discover which equivalence relations (transformations thereof) affect the label.
# 7 CONCLUSIONS

This work considered an out-of-distribution (OOD) classification task where the shift between the train and test environments arises through different symmetry transformations of the input, with symmetry transformations defined via equivalence relations over the input space. We cast the task of finding the symmetries that affect the label as a causal structure discovery task and showed that, under certain conditions, we can use the predictive performance of invariant representations on the observational data to decide whether an edge exists in the causal DAG (Theorem 1). We then proposed an MDL-based scoring criterion for this causal structure discovery. Finally, we tested our approach on two simulated physics tasks and six image classification tasks.

# ACKNOWLEDGMENTS

This work was funded in part by the National Science Foundation (NSF) Awards CAREER IIS-1943364 and CCF-1918483, the Purdue Integrative Data Science Initiative, and the Wabash Heartland Innovation Network. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.

# REFERENCES

Kartik Ahuja, Ethan Caballero, Dinghuai Zhang, Jean-Christophe Gagnon-Audet, Yoshua Bengio, Ioannis Mitliagkas, and Irina Rish. Invariance principle meets information bottleneck for out-of-distribution generalization. Advances in Neural Information Processing Systems, 34, 2021.
Fabio Anselmi, Georgios Evangelopoulos, Lorenzo Rosasco, and Tomaso Poggio. Symmetry-adapted representation learning. Pattern Recognition, 86:201-208, February 2019. ISSN 0031-3203. doi: 10.1016/j.patcog.2018.07.025.
Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
Elias Bareinboim, Juan Correa, Duligur Ibeling, and Thomas Icard. On Pearl's hierarchy and the foundations of causal inference.
ACM special volume in honor of Judea Pearl, 2020.
Alexis Bellot and Mihaela van der Schaar. Accounting for unobserved confounding in domain generalization. arXiv preprint arXiv:2007.10653, 2020.
Shai Ben-David, John Blitzer, Koby Crammer, Fernando Pereira, et al. Analysis of representations for domain adaptation. Advances in Neural Information Processing Systems, 19:137, 2007.
Emmanuel Bengio, Moksh Jain, Maksym Korablyov, Doina Precup, and Yoshua Bengio. Flow network based generative models for non-iterative diverse candidate generation. Advances in Neural Information Processing Systems, 34, 2021a.
Yoshua Bengio, Tristan Deleu, Edward J Hu, Salem Lahlou, Mo Tiwari, and Emmanuel Bengio. GFlowNet foundations. arXiv preprint arXiv:2111.09266, 2021b.
Gregory Benton, Marc Finzi, Pavel Izmailov, and Andrew Gordon Wilson. Learning invariances in neural networks from data. NeurIPS, 2020.
Jörg Bornschein, Silvia Chiappa, Alan Malek, and Rosemary Nan Ke. Prequential MDL for causal structure learning with neural networks. July 2021.
Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, 2017.
Kailash Budhathoki and Jilles Vreeken. Causal inference by compression. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pp. 41-50, Barcelona, Spain, December 2016. IEEE. ISBN 978-1-5090-5473-2. doi: 10.1109/ICDM.2016.0015.
Kailash Budhathoki and Jilles Vreeken. MDL for causal inference on discrete data. In 2017 IEEE International Conference on Data Mining (ICDM), pp. 751-756, November 2017. doi: 10.1109/ICDM.2017.87.
Shuxiao Chen, Edgar Dobriban, and Jane H. Lee. A group-theoretic framework for data augmentation. Journal of Machine Learning Research, 21(245):1-71, 2020. URL http://jmlr.org/papers/v21/20-163.html.
David Maxwell Chickering. Optimal Structure Identification With Greedy Search.
Journal of Machine Learning Research, 3(Nov):507-554, 2002. ISSN 1533-7928.
Taco Cohen and Max Welling. Group equivariant convolutional networks. In International Conference on Machine Learning, pp. 2990-2999. PMLR, 2016.
Elliot Creager, Jörn-Henrik Jacobsen, and Richard Zemel. Environment inference for invariant learning. In International Conference on Machine Learning, pp. 2189-2200. PMLR, 2021.
Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D Hoffman, et al. Underspecification presents challenges for credibility in modern machine learning. arXiv preprint arXiv:2011.03395, 2020.
Tristan Deleu, António Góis, Chris Emezue, Mansi Rankawat, Simon Lacoste-Julien, Stefan Bauer, and Yoshua Bengio. Bayesian structure learning with generative flow networks. arXiv preprint arXiv:2202.13903, 2022.
Chenwei Ding, Biwei Huang, Mingming Gong, Kun Zhang, Tongliang Liu, and Dacheng Tao. Score-based causal discovery from heterogeneous data. September 2020.
Yann Dubois, Benjamin Bloem-Reddy, Karen Ullrich, and Chris J Maddison. Lossy compression for lossless prediction. arXiv preprint arXiv:2106.10800, 2021.
Marc Finzi, Max Welling, and Andrew Gordon Wilson. A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups. arXiv preprint arXiv:2104.09459, 2021.
Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11):665-673, 2020.
Olivier Goudet, Diviyan Kalainathan, Philippe Caillou, Isabelle Guyon, David Lopez-Paz, and Michèle Sebag. Causal generative neural networks. arXiv preprint arXiv:1711.08936, 2017.
Christina Heinze-Deml, Marloes H. Maathuis, and Nicolai Meinshausen. Causal structure learning. arXiv:1706.09141 [stat], June 2017.
Christina Heinze-Deml, Jonas Peters, and Nicolai Meinshausen.
Invariant Causal Prediction for Nonlinear Models. arXiv:1706.08576 [stat], September 2018. +Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. In Advances in Neural Information Processing Systems, 2020. +Biwei Huang, Kun Zhang, Yizhu Lin, Bernhard Scholkopf, and Clark Glymour. Generalized Score Functions for Causal Discovery. KDD: proceedings. International Conference on Knowledge Discovery & Data Mining, 2018:1551-1560, August 2018. ISSN 2154-817X. doi: 10.1145/3219819.3220104. +Maximilian Ilse, Jakub M Tomczak, and Patrick Forre. Selecting data augmentation for simulating interventions. In International Conference on Machine Learning, pp. 4555-4562. PMLR, 2021. +Fredrik Johansson, Uri Shalit, and David Sontag. Learning representations for counterfactual inference. In International conference on machine learning, pp. 3020-3029, 2016. +Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. Learning the difference that makes a difference with counterfactually-augmented data. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SklgsoNFvr. +Divyansh Kaushik, Amrith Setlur, Eduard H Hovy, and Zachary Chase Lipton. Explaining the efficacy of counterfactually augmented data. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=HHiiQKWsOcv. +Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Sara Beery, et al. Wilds: A benchmark of in-the-wild distribution shifts. arXiv preprint arXiv:2012.07421, 2020. +Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In International Conference on Machine Learning, pp. 2747-2755. PMLR, 2018. +Alex Krizhevsky, Geoffrey Hinton, et al. 
Learning multiple layers of features from tiny images. 2009. + +David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (rex). In International Conference on Machine Learning, pp. 5815-5826. PMLR, 2021. +Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. In International conference on machine learning, pp. 2208-2217. PMLR, 2017. +Clare Lyle, Mark van der Wilk, Marta Kwiatkowska, Yarin Gal, and Benjamin Bloem-Reddy. On the benefits of invariance in neural networks. arXiv preprint arXiv:2005.00178, 2020. +Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. arXiv preprint arXiv:1812.09902, 2018. +S Chandra Mouli and Bruno Ribeiro. Neural network extrapolations with g-invariances from a single environment. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=7t1FcJUWhi3. +Krikamol Muandet, David Balduzzi, and Bernhard Scholkopf. Domain generalization via invariant feature representation. In International Conference on Machine Learning, pp. 10-18, 2013. +R. Murphy, B. Srinivasan, V. Rao, and B. Ribeiro. Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs. In International Conference on Learning Representations, 2019a. +Ryan Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Relational pooling for graph representations. In Proceedings of the 36th International Conference on Machine Learning, 2019b. +J Pearl and D Mackenzie. The ladder of causation. The book of why: the new science of cause and effect. New York (NY): Basic Books, pp. 23-52, 2018. +Jonas Peters, Peter Buhlmann, and Nicolai Meinshausen. Causal inference using invariant prediction: identification and confidence intervals. arXiv preprint arXiv:1501.01332, 2015. 
+Liu Qidong, Tian Feng, Ji Weihua, and Zheng Qinghua. A new representation learning method for individual treatment effect estimation: Split covariate representation network. In Asian Conference on Machine Learning, pp. 811-822. PMLR, 2020. +Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. Dataset shift in machine learning. The MIT Press, 2009. +Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, and Jonas Peters. Invariant models for causal transfer learning. The Journal of Machine Learning Research, 19(1):1309-1342, 2018. +Joseph Rosen. Symmetry rules: How science and nature are founded on symmetry. Springer Science & Business Media, 2008. +Elan Rosenfeld, Pradeep Ravikumar, and Andrej Risteski. The risks of invariant risk minimization. arXiv preprint arXiv:2010.05761, 2020. +Bernhard Schölkopf. Causality for machine learning. In Probabilistic and Causal Inference: The Works of Judea Pearl, pp. 765-804. 2022. +Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. Toward causal representation learning. Proceedings of the IEEE, 109(5):612-634, 2021. +Gideon Schwarz. Estimating the Dimension of a Model. The Annals of Statistics, 6(2):461-464, March 1978. ISSN 0090-5364, 2168-8966. doi: 10.1214/aos/1176344136. +Uri Shalit, Fredrik D Johansson, and David Sontag. Estimating individual treatment effect: generalization bounds and algorithms. In International Conference on Machine Learning, pp. 3076-3085. PMLR, 2017. + +John Shawe-Taylor. Symmetries and discriminability in feedforward network architectures. IEEE Transactions on Neural Networks, 4(5):816-826, 1993. +Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 90(2):227-244, 2000. +Yurii Mikhailovich Shtarkov. Universal sequential coding of single messages. 
Problemy Peredachi Informatii, 23(3):3-17, 1987. +Murray Sidman and William Tailby. Conditional discrimination vs. matching to sample: An expansion of the testing paradigm. Journal of the Experimental Analysis of behavior, 37(1):5-22, 1982. +Murray Sidman, Ricki Rauzin, Ronald Lazar, Sharon Cunningham, William Tailby, and Philip Carrigan. A search for symmetry in the conditional discriminations of rhesus monkeys, baboons, and children. Journal of the experimental analysis of behavior, 37(1):23-44, 1982. +Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. +Peter Spirtes, Clark Glymour, and Richard Scheines. Causation, Prediction, and Search, 2nd Edition. MIT Press Books, The MIT Press, 2001. +Xiaohai Sun, Dominik Janzing, Bernhard Scholkopf, and Kenji Fukumizu. A kernel-based causal learning algorithm. In Proceedings of the 24th International Conference on Machine Learning, ICML '07, pp. 855-862, New York, NY, USA, June 2007. Association for Computing Machinery. ISBN 978-1-59593-793-3. doi: 10.1145/1273496.1273604. +Damien Teney, Ehsan Abbasnedjad, and Anton van den Hengel. Learning what makes a difference from counterfactual examples and gradient supervision. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part X 16, pp. 580-599. Springer, 2020. +Mark van der Wilk, Matthias Bauer, ST John, and James Hensman. Learning invariances using the marginal likelihood. In Advances in Neural Information Processing Systems, pp. 9938-9948, 2018. +Victor Veitch, Alexander D'Amour, Steve Yadlowsky, and Jacob Eisenstein. Counterfactual invariance to spurious correlations: Why and how to pass stress tests. arXiv preprint arXiv:2106.00545, 2021. +Yixin Wang and Michael I. Jordan. Desiderata for Representation Learning: A Causal Perspective. arXiv:2109.03795 [cs, stat], September 2021. 
Gesche Westphal-Fitch, Ludwig Huber, Juan Carlos Gomez, and W Tecumseh Fitch. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1598):2007-2022, 2012.
Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep Sets. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 3391-3401. Curran Associates, Inc., 2017.
Kun Zhang, Mingming Gong, and Bernhard Schölkopf. Multi-source domain adaptation: A causal view. In AAAI, volume 1, pp. 3150-3157, 2015.
Han Zhao, Remi Tachet Des Combes, Kun Zhang, and Geoffrey Gordon. On learning invariant representations for domain adaptation. In International Conference on Machine Learning, pp. 7523-7532. PMLR, 2019.
Allan Zhou, Tom Knowles, and Chelsea Finn. Meta-learning symmetries by reparameterization. arXiv:2007.02933 [cs, stat], October 2020.
Shengyu Zhu, Ignavier Ng, and Zhitang Chen. Causal discovery with reinforcement learning. In International Conference on Learning Representations, 2020.

# A APPENDIX

# A.1 JUSTIFICATION FOR ASSUMPTION 1

Assumption 1 is inspired by the deep relationship between symmetries and intelligence. Young children, unlike monkeys and baboons, assume that a conditional stimulus F given another stimulus D extrapolates to a symmetric relation D given F without ever seeing any such examples (Sidman et al., 1982). That is, if given D, action F produces a treat, the child assumes that given F, action D also produces a treat. Young children differ from primates in their ability to use symmetries to build conceptual relations beyond visual patterns (Sidman & Tailby, 1982; Westphal-Fitch et al., 2012), allowing extrapolations from intelligent reasoning.
However, forcing symmetries against data evidence is undesirable, since symmetries can provide valuable information when they are broken. Unsurprisingly, humans are generally able to quickly find and pay attention to some types of asymmetries.

# A.2 PROOF OF THEOREM 1

Theorem 1 (Counterfactual invariance & causal DAG identification). Let $\mathcal{X}/\sim_{i}$ be lumpable given every $\mathcal{T}^{(j)}$, $j \neq i$, as in Definition 2. Then, the structural causal DAG implied by Equations (2) to (4) (depicted in Figure 2a(i)) does not contain the edge $U_{i} \to Y$ iff

$$
\left| P\left(Y \mid \Gamma_{i}(X), U_{Y}\right) - P\left(Y \mid X, U_{Y}\right) \right|_{\mathrm{TV}} = 0, \tag{5}
$$

$\forall P(X^{\dagger}), \forall P(U_1), \ldots, \forall P(U_m)$, where $\Gamma_i$ is a most-expressive representation that is invariant with respect to $\sim_i$.

Proof. Notation (following Equation (3)): the observed input $X$ is $X := t(U_1, \ldots, U_{i-1}, u_i, U_{i+1}, \ldots, U_m) \circ X^\dagger$, where $t(U_1, \ldots, U_m)$ is obtained by interleaving the transformation sequences from each individual $U_1, \ldots, U_m$ and we have set $U_i = u_i$.

Necessity: We wish to show that if the SCM does not contain the edge $U_{i} \to Y$, then Equation (5) holds for all $P(X^{\dagger}), P(U_1), \ldots, P(U_m)$. By this assumption, $Y$ outputs the same label for any value of $U_{i}$. Consider the collection of equivalence classes $\mathcal{X}/\sim_{i}$. By the lumpability condition of Definition 2, every transformation $t^{(j)} \in \mathcal{T}^{(j)}$, $j \neq i$, maps all points in one equivalence class of $\sim_{i}$ to points in a single (possibly different) equivalence class. On the other hand, all transformations $t^{(i)} \in \mathcal{T}^{(i)}$ map points to other points within the same equivalence class under $\sim_{i}$. Now, consider the equivalence class of $X$ after all the transformations have been applied to $X^{\dagger}$.
The equivalence class of $X = t(U_{1},\dots ,U_{i - 1},u_{i},U_{i + 1},\dots ,U_{m}) \circ X^{\dagger}$ is the same as that of $X^{*} = t(U_{1},\dots ,U_{i - 1},u_{i}^{\mathrm{id}},U_{i + 1},\dots ,U_{m}) \circ X^{\dagger}$, where $U_{i} = u_{i}^{\mathrm{id}}$ always selects identity transformations. This is because changing $u_{i}$ to $u_{i}^{\mathrm{id}}$ only affects the transformations chosen from $\mathcal{T}^{(i)}$, and these transformations do not change the equivalence class under $\sim_{i}$. Thus, we reach the same equivalence class under $\sim_{i}$ for both $X$ and $X^{*}$.

Now let $\Gamma_{i}$ be a most-expressive representation that is invariant with respect to $\sim_{i}$. By definition, $\Gamma_{i}$ outputs the same value within an equivalence class; thus, $\Gamma_{i}(X) = \Gamma_{i}(X^{*})$. But since by assumption $U_{i} \to Y$ does not exist, $X$ and $X^{*}$ always have the same label. Thus, there is no loss of information incurred by $\Gamma_{i}$ in predicting $Y$ under the additional constraint $\Gamma_{i}(X) = \Gamma_{i}(X^{*})$. Since $\Gamma_{i}$ is most-expressive, we have $P(Y = y \mid \Gamma_{i}(X), U_{Y}) = P(Y = y \mid X, U_{Y})$ for all $y \in \mathcal{Y}$. This holds for all values of $u_{i}$, and hence we get the desired result for any distribution $P(U_{i})$.

Sufficiency: We wish to show that if Equation (5) holds for all $P(X^{\dagger})$ and $P(U_1),\ldots ,P(U_m)$, then there is no edge $U_{i}\to Y$ in the causal graph. We prove the contrapositive: assuming there is an edge $U_{i}\rightarrow Y$, we show that there exist distributions $P(X^{\dagger})$ and $P(U_{1}),\dots ,P(U_{m})$ such that Equation (5) does not hold.

Define $P(X^{\dagger}) = \delta_{x^{\dagger}}$ for some $x^{\dagger} \in \mathcal{X}$, where $\delta$ denotes a Dirac-delta function. Define $P(U_i = u_i^{\mathrm{id}}) = 0.5$ and $P(U_i = u_i) = 0.5$ for $u_i^{\mathrm{id}}, u_i \in \operatorname{supp}(U_i)$.
As usual, $u_i^{\mathrm{id}}$ always selects the identity transformation, and $u_i$ selects a single transformation $t_{u_i} \in \mathcal{T}^{(i)}$. Similarly, for all $j \neq i$, define $P(U_j) = \delta_{u_j^{\mathrm{id}}}$ for $u_j^{\mathrm{id}} \in \operatorname{supp}(U_j)$ that selects only identity transformations. Now, there are two possible observed inputs: $\boldsymbol{x} = t(u_1^{\mathrm{id}}, \ldots, u_m^{\mathrm{id}}) \circ x^{\dagger} = x^{\dagger}$ and $\boldsymbol{x}' = t(u_1^{\mathrm{id}}, \ldots, u_i, \ldots, u_m^{\mathrm{id}}) \circ x^{\dagger} = t_{u_i} \circ x^{\dagger}$. Finally, define $Y := \mathbf{1}(U_i = u_i^{\mathrm{id}})$; thus $\boldsymbol{x}$ and $\boldsymbol{x}'$ have different labels. But any invariant representation $\Gamma_{i}$ by definition has $\Gamma_{i}(\pmb {x}) = \Gamma_{i}(\pmb{x}^{\prime})$ since they belong to the same equivalence class. Thus, even if $\Gamma_{i}$ is most-expressive, we have $|P(Y|\Gamma_i(X),U_Y) - P(Y|X,U_Y)|_{\mathrm{TV}} = 0.5$.

![](images/e8f27dfd740387831e73ad79e9628bcec57dc3eb8b6f91a5d31930ed55a7f04b.jpg)

# A.3 ADDITIONAL RELATED WORK

Counterfactual invariances. Wang & Jordan (2021) use counterfactual language to formally define and learn non-spurious, disentangled representations from a single environment. Our work is different in the following ways. In the structural causal model (SCM) of their work, the authors assume that there are no confounders between the observed $X$ and the label $Y$. However, in our SCM (Figure 2a(i)), we allow unobserved confounders $X^{\dagger}$ and $U_{i}, i \in \mathbb{D}$. The hidden transformation variables $U_{i}, i \in \mathbb{D}$ are confounders because they affect both the observed input $X$ and the labels $Y$. We leverage the fact that the confounders are related to symmetries (and do not affect $X$ arbitrarily) to resolve the issue with unobserved confounding. Wang & Jordan (2021) also require pinpointability of the cause of the observed $X$.
In our setting, this is typically not possible since there are multiple paths of transformations from $X^{\dagger}$ to the same observed $X$ . Thus, all the parents of $X$ may not be pinpointable, specifically the transformation variables $U_{1}, \ldots, U_{m}$ . + +Kaushik et al. (2020; 2021) propose counterfactual data augmentation for text datasets where human annotators are asked to make minimal modifications to the input document so as to change its label (for example, by changing a few positive words to negative words) while keeping style, etc. fixed. This type of augmentation essentially asks the labelers to identify all the causal features in the document and make modifications to those features alone. This can be seen as obtaining new counterfactual examples by simulating the causal model and requires knowing the true function that describes how the features affect the labels. We consider the more realistic setting where we do not have access to such a collection of counterfactual examples. In this work, we consider the traditional automated data augmentations under a mostly unknown data generation process, as opposed to the counterfactual data augmentation (Kaushik et al., 2020) that either considers a fully-specified toy SCM or relies on humans-in-the-loop to generate counterfactual data. + +In Figure 1(c) we show that the standard data augmentation is not sufficient for the OOD task. However, if one had access to the fully-specified causal model, one could generate the counterfactual data shown in Figure 1(d) and learn an OOD classifier with the counterfactually augmented data (as done by Kaushik et al. (2020)). But our work does not assume access to these counterfactual examples. Additionally, we prove that a counterfactual invariant classifier can be constructed from traditional data augmentation alone if the lumpability condition (Definition 2) is satisfied. This is not the case in Figure 1(d). + +Veitch et al. 
(2021) define counterfactual invariant predictors $f(X)$ when $X$ has a single parent $Z$ and provide conditions such predictors must satisfy over the observed distribution (given an SCM). Note also that Veitch et al. (2021) assume that part of the observed input $X(X_{Z}^{\perp})$ is not causally influenced by the confounder $Z$ . In our scenarios this is not generally true. For example, under a color change, the entire observed image $X$ changes. Still, we show that the notion of a counterfactual invariant predictor exists. Hence, the definition of Veitch et al. (2021, Lemma 3.1) of a counterfactually invariant predictor that requires a segment of $X$ to not causally depend on $Z$ , a fundamental result of their work, unfortunately does not apply to our setting (since $X$ may have no such segment). + +# A.4 MNIST-\{3,4\} EXPERIMENTS WITH FINITE TRANSFORMATION GROUPS + +We test our proposed method on out-of-distribution tasks on images where the equivalence relations (symmetries) are provided as transformation groups (e.g., $90^{\circ}$ rotations). We use the MNIST-\{3, 4\} (colored) dataset (Mouli & Ribeiro, 2021) that only contains digits 3 and 4, and follow their experimental setup. MNIST-\{3, 4\} is used to avoid any confounding factors while testing if the proposed method can learn the correct invariances, not for any practical considerations (e.g., rotated 6 is a 9 and would interfere with some experiments, etc.). + +We consider equivalence relations obtained from 3 different transformation groups: rotations by $90^{\circ}$ (denoted $G_{\mathrm{rot}}$ ), vertically flipping the image (denoted $G_{\mathrm{v - flip}}$ ), and permuting the RGB color channels of the image (denoted $G_{\mathrm{col}}$ ). The 3 corresponding equivalence relations are lumpable (Definition 2) with respect to the transformations in the other two groups in almost all the cases. 
The only exception

Table 2: Results for different function classes on the MNIST-{3,4} classification task with $\bar{\mathbb{D}} = \{\mathrm{rot},\mathrm{col},\mathrm{vflip}\}$, $\mathbb{D} = \varnothing$, i.e., the task is invariant to all three groups ($\bar{\mathbb{D}}$) and sensitive to none ($\mathbb{D}$). $R(\mathcal{F})$, $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ and $S(\mathcal{F},\mathcal{D})$ are as discussed in Section 4.2. **Bold** values indicate the function class chosen by the GES method with the proposed scoring criterion. Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i\in \bar{\mathbb{D}}})$. We see that the $S(\mathcal{F},\mathcal{D})$ loss selects the correct model class in training.
| Model class | $R(\mathcal{F})$ | $+\ \widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ | $+\ \mathrm{NLL}(\mathcal{F},\mathcal{D})$ | $=\ S(\mathcal{F},\mathcal{D})$ | Train Acc | Test Acc |
|---|---|---|---|---|---|---|
| $\mathcal{F}_{\emptyset}$ | 7 | 6639.310 | 0.013 | 6646.324 | 100.00 (0.00) | 48.38 (5.22) |
| $\mathcal{F}_{\{\mathrm{vflip}\}}$ | 3 | 6639.241 | 0.079 | 6642.320 | 100.00 (0.00) | 47.08 (5.34) |
| $\mathcal{F}_{\{\mathrm{col}\}}$ | 3 | 6639.241 | 0.029 | 6642.270 | 100.00 (0.00) | 53.92 (2.47) |
| $\mathcal{F}_{\{\mathrm{col},\mathrm{vflip}\}}$ | 1 | 6639.241 | 0.099 | 6640.340 | 100.00 (0.00) | 53.15 (1.83) |
| $\mathcal{F}_{\{\mathrm{rot}\}}$ | 3 | 6639.241 | 0.037 | 6642.278 | 100.00 (0.00) | 53.06 (10.00) |
| $\mathcal{F}_{\{\mathrm{rot},\mathrm{vflip}\}}$ | 1 | 6639.241 | 0.580 | 6640.821 | 100.00 (0.01) | 54.86 (13.60) |
| $\mathcal{F}_{\{\mathrm{rot},\mathrm{col}\}}$ | 1 | 6639.241 | 0.043 | 6640.284 | 100.00 (0.00) | 90.29 (6.76) |
| **$\mathcal{F}_{\{\mathrm{rot},\mathrm{col},\mathrm{vflip}\}}$** | **0** | **6639.241** | **0.210** | **6639.451** | 100.00 (0.00) | 92.02 (2.99) |
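The selection rule these tables implement can be sketched directly (a minimal sketch; the $(R, \widehat{\mathrm{COMP}}, \mathrm{NLL})$ triples are the Table 2 values, and tie-breaking by $R(\mathcal{F})$ follows the tie-breaking discussion later in this appendix):

```python
# Minimal sketch of the model-class selection the tables illustrate: compute
# S(F, D) = R(F) + COMP(F, D) + NLL(F, D) for each function class and pick the
# argmin, breaking exact ties by the smaller R(F). Numbers are the
# (R, COMP, NLL) triples of Table 2.
table2 = {
    "F{}":              (7, 6639.310, 0.013),
    "F{vflip}":         (3, 6639.241, 0.079),
    "F{col}":           (3, 6639.241, 0.029),
    "F{col,vflip}":     (1, 6639.241, 0.099),
    "F{rot}":           (3, 6639.241, 0.037),
    "F{rot,vflip}":     (1, 6639.241, 0.580),
    "F{rot,col}":       (1, 6639.241, 0.043),
    "F{rot,col,vflip}": (0, 6639.241, 0.210),
}

def score(triple):
    r, comp, nll = triple
    return r + comp + nll

chosen = min(table2, key=lambda f: (round(score(table2[f]), 3), table2[f][0]))
print(chosen)  # F{rot,col,vflip}
```

The chosen class coincides with the minimum of the $S(\mathcal{F},\mathcal{D})$ column in Table 2, as the caption states.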
Table 3: Results for different function classes on the MNIST-{3,4} classification task with $\bar{\mathbb{D}} = \{\mathrm{rot},\mathrm{vflip}\}$, $\mathbb{D} = \{\mathrm{col}\}$, i.e., the task is invariant to the rotation and vertical-flip groups ($\bar{\mathbb{D}}$) but sensitive to color ($\mathbb{D}$). $R(\mathcal{F})$, $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ and $S(\mathcal{F},\mathcal{D})$ are as discussed in Section 4.2. **Bold** values indicate the function class chosen by the GES method with the proposed scoring criterion. Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i\in \bar{\mathbb{D}}})$. We see that the $S(\mathcal{F},\mathcal{D})$ loss selects the correct model class in training.
| Model class | $R(\mathcal{F})$ | $+\ \widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ | $+\ \mathrm{NLL}(\mathcal{F},\mathcal{D})$ | $=\ S(\mathcal{F},\mathcal{D})$ | Train Acc | Test Acc |
|---|---|---|---|---|---|---|
| $\mathcal{F}_{\emptyset}$ | 7 | 6639.241 | 0.010 | 6646.251 | 100.00 (0.00) | 54.79 (0.74) |
| $\mathcal{F}_{\{\mathrm{vflip}\}}$ | 3 | 6639.241 | 0.012 | 6642.253 | 100.00 (0.00) | 55.05 (1.56) |
| $\mathcal{F}_{\{\mathrm{col}\}}$ | 3 | 6639.240 | 8269.480 | 14911.720 | 41.98 (5.79) | 18.81 (2.94) |
| $\mathcal{F}_{\{\mathrm{col},\mathrm{vflip}\}}$ | 1 | 6639.241 | 8275.716 | 14915.957 | 42.71 (4.07) | 18.62 (2.25) |
| $\mathcal{F}_{\{\mathrm{rot}\}}$ | 3 | 6638.946 | 0.132 | 6642.078 | 100.00 (0.00) | 91.40 (3.19) |
| **$\mathcal{F}_{\{\mathrm{rot},\mathrm{vflip}\}}$** | **1** | **6638.428** | **0.504** | **6639.932** | 100.00 (0.00) | 92.32 (1.84) |
| $\mathcal{F}_{\{\mathrm{rot},\mathrm{col}\}}$ | 1 | 6639.241 | 8412.954 | 15053.194 | 37.20 (1.97) | 29.25 (5.18) |
| $\mathcal{F}_{\{\mathrm{rot},\mathrm{col},\mathrm{vflip}\}}$ | 0 | 6639.239 | 8389.719 | 15028.958 | 38.01 (2.02) | 29.98 (3.96) |
+ +is the equivalence relation $\sim_{\mathrm{v - flip}}$ , which is not lumpable with respect to the transformations in $G_{\mathrm{rot}}$ . Consequently, we do not consider a task with invariance to vertical flip alone. We test our method on the same 4 classification tasks proposed by Mouli & Ribeiro (2021) where each task represents the case where the target $Y$ has different invariances, i.e., invariant to all three groups, to two, to one, invariant to none (the task is sensitive to the remaining groups). + +We use the VGG architecture (Simonyan & Zisserman, 2014) for image classification and construct a collection of function classes $\mathcal{F} \coloneqq \{\mathcal{F}_{\mathbb{S}} : \mathbb{S} \subseteq \{\mathrm{rot}, \mathrm{col}, \mathrm{v}\text{-flip}\}\}$ corresponding to various invariant representations. For example, $\mathcal{F}_{\{\mathrm{rot}, \mathrm{col}\}}$ is a space of functions (CNNs) that are G-invariant to the rotation and color-permutation groups ( $G_{\mathrm{rot}}$ and $G_{\mathrm{col}}$ ), and $\mathcal{F}_{\emptyset}$ is the space of functions with no invariance (standard CNN). + +Results. Our results are shown in Tables 2 to 5 for the four tasks respectively where the label is (i) invariant to all three groups, (ii) invariant to only rotation and vertical flips, (iii) invariant to color-permutation, and (iv) invariant to none. We show the values for $R(\mathcal{F})$ , $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ and $S(\mathcal{F},\mathcal{D})$ as discussed in Section 4.2. Bold values in the tables indicate the function class chosen by GES method with the proposed scoring criterion (minimizing $S(\mathcal{F},\mathcal{D})$ ). Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i\in \bar{\mathbb{D}}})$ (i.e., by applying the transformations that the label is invariant to). 
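The shift of $P(\{U_i\}_{i\in\bar{\mathbb{D}}})$ used to build the extrapolated test set can be sketched as follows (a toy sketch assuming images are $H \times W \times C$ arrays; `random_group_transform` is an illustrative helper, not the authors' code):

```python
import numpy as np

# Toy construction of the extrapolated test set: apply a random transformation
# from G_rot (90-degree rotations), G_v-flip (vertical flips) and G_col
# (RGB channel permutations) to each image.
def random_group_transform(img, rng):
    img = np.rot90(img, k=int(rng.integers(0, 4)), axes=(0, 1))  # G_rot
    if rng.integers(0, 2):                                       # G_v-flip
        img = np.flipud(img)
    return img[:, :, rng.permutation(3)]                         # G_col

rng = np.random.default_rng(0)
test_set = rng.random((8, 28, 28, 3))  # stand-in for MNIST-{3,4} test images
extrapolated = np.stack([random_group_transform(x, rng) for x in test_set])
```

Since every transformation only permutes pixels and channels, each extrapolated image contains exactly the same values as its source image, just rearranged.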
In Tables 2 and 3, we see that the proposed method selects the correct model class in training and achieves the best OOD test accuracy. In Tables 4 and 5, the method is excessively invariant (to vertical flips) but still achieves within $1\%$ of the best OOD test accuracy. The OOD test accuracy of a

Table 4: Results for different function classes on the MNIST-{3,4} classification task with $\bar{\mathbb{D}} = \{\mathrm{col}\}$, $\mathbb{D} = \{\mathrm{rot},\mathrm{vflip}\}$, i.e., the task is invariant to color ($\bar{\mathbb{D}}$) but sensitive to rotations and vertical flips ($\mathbb{D}$). $R(\mathcal{F})$, $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ and $S(\mathcal{F},\mathcal{D})$ are as discussed in Section 4.2. **Bold** values indicate the function class chosen by the GES method with the proposed scoring criterion. Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i\in \bar{\mathbb{D}}})$. We see that the $S(\mathcal{F},\mathcal{D})$ loss selects a model that is excessively invariant in training, but the test accuracy is not penalized much by the extra invariance (vertical flips).
| Model class | $R(\mathcal{F})$ | $+\ \widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ | $+\ \mathrm{NLL}(\mathcal{F},\mathcal{D})$ | $=\ S(\mathcal{F},\mathcal{D})$ | Train Acc | Test Acc |
|---|---|---|---|---|---|---|
| $\mathcal{F}_{\emptyset}$ | 7 | 6639.241 | 2.395 | 6648.636 | 100.00 (0.01) | 16.87 (5.88) |
| $\mathcal{F}_{\{\mathrm{vflip}\}}$ | 3 | 6639.233 | 5.370 | 6647.603 | 99.99 (0.05) | 15.71 (5.53) |
| $\mathcal{F}_{\{\mathrm{col}\}}$ | 3 | 6639.196 | 2.315 | 6644.512 | 100.00 (0.00) | 97.28 (0.28) |
| **$\mathcal{F}_{\{\mathrm{col},\mathrm{vflip}\}}$** | **1** | **6639.240** | **3.098** | **6643.337** | 100.00 (0.00) | 96.82 (0.54) |
| $\mathcal{F}_{\{\mathrm{rot}\}}$ | 3 | 6639.228 | 5296.755 | 11938.984 | 56.17 (3.90) | 6.20 (0.86) |
| $\mathcal{F}_{\{\mathrm{rot},\mathrm{vflip}\}}$ | 1 | 6639.221 | 5325.008 | 11965.228 | 55.96 (5.39) | 7.24 (1.48) |
| $\mathcal{F}_{\{\mathrm{rot},\mathrm{col}\}}$ | 1 | 6639.218 | 5322.015 | 11962.233 | 56.14 (3.31) | 47.98 (1.34) |
| $\mathcal{F}_{\{\mathrm{rot},\mathrm{col},\mathrm{vflip}\}}$ | 0 | 6639.230 | 5342.805 | 11982.035 | 55.32 (3.80) | 49.25 (3.09) |
+ +Table 5: Results for different function classes on the MNIST-\{3,4\} classification task with $\bar{\mathbb{D}} = \emptyset, \mathbb{D} = \{\text{rot}, \text{col}, \text{vflip}\}$ , i.e., task is sensitive to all three groups ( $\mathbb{D}$ ) and insensitive to none ( $\bar{\mathbb{D}}$ ). $R(\mathcal{F}), \widehat{\mathrm{COMP}}(\mathcal{F}, \mathcal{D})$ and $S(\mathcal{F}, \mathcal{D})$ are as discussed in Section 4.2. **Bold** values indicate the function class chosen by GES method with the proposed scoring criterion. Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i \in \bar{\mathbb{D}}})$ . We see that the $S(\mathcal{F}, \mathcal{D})$ loss selects a model that is excessively invariant in training, but the test accuracy is not that much penalized by the extra invariance (vertical flip). + +
| Model class | $R(\mathcal{F})$ | $+\ \widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ | $+\ \mathrm{NLL}(\mathcal{F},\mathcal{D})$ | $=\ S(\mathcal{F},\mathcal{D})$ | Train Acc | Test Acc |
|---|---|---|---|---|---|---|
| $\mathcal{F}_{\emptyset}$ | 7 | 6639.165 | 1.195 | 6647.360 | 100.00 (0.00) | 96.00 (0.60) |
| **$\mathcal{F}_{\{\mathrm{vflip}\}}$** | **3** | **6639.117** | **3.548** | **6645.665** | 100.00 (0.00) | 95.18 (0.45) |
| $\mathcal{F}_{\{\mathrm{col}\}}$ | 3 | 6639.192 | 7536.167 | 14178.359 | 58.77 (3.34) | 32.45 (2.18) |
| $\mathcal{F}_{\{\mathrm{col},\mathrm{vflip}\}}$ | 1 | 6639.184 | 7902.462 | 14542.645 | 52.50 (7.64) | 31.21 (2.48) |
| $\mathcal{F}_{\{\mathrm{rot}\}}$ | 3 | 6639.153 | 5259.957 | 11902.110 | 58.12 (4.05) | 47.23 (1.89) |
| $\mathcal{F}_{\{\mathrm{rot},\mathrm{vflip}\}}$ | 1 | 6639.827 | 5267.771 | 11908.598 | 57.13 (1.38) | 47.57 (2.15) |
| $\mathcal{F}_{\{\mathrm{rot},\mathrm{col}\}}$ | 1 | 6639.088 | 13628.356 | 20268.443 | 23.78 (2.25) | 15.93 (0.71) |
| $\mathcal{F}_{\{\mathrm{rot},\mathrm{col},\mathrm{vflip}\}}$ | 0 | 6639.055 | 13705.123 | 20344.178 | 22.97 (3.32) | 16.13 (2.22) |
standard CNN with no invariance $(\mathcal{F}_{\emptyset})$ is typically very low, except in Table 5 where sensitivity to all groups is required. We can also see the importance of $R(\mathcal{F})$ for tie-breaking in these experiments. As discussed in Section 4.2, $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ is unable to distinguish between the different function classes because the training data contains a single example per equivalence class (see Figure 2c).

# A.5 CIFAR10 EXPERIMENTS WITH INFINITE/NON-GROUP TRANSFORMATION SETS

In this section, we test our proposed method on out-of-distribution tasks on CIFAR10 images (Krizhevsky et al., 2009), where the equivalence relations are provided as infinite sets of transformations that may not form a group. We use (a) arbitrary rotation transformations over an image (denoted $\mathcal{T}_{\mathrm{rot}}$), and (b) shifts of the hue of an image (denoted $\mathcal{T}_{\mathrm{col}}$). Note that for a bounded image, arbitrary rotation is not a group due to cropping. Further, transformations from the respective sets commute with each other, and hence the lumpability condition (Definition 2) is satisfied for the corresponding equivalence relations.

We test our method on two classification tasks: (i) invariant to both sets of transformations (arbitrary rotations and hue shifts), and (ii) invariant to arbitrary rotations, but sensitive to hue shifts. As before, we use the VGG architecture (Simonyan & Zisserman, 2014) for image classification and construct a collection of function classes $\mathcal{F} \coloneqq \{\mathcal{F}_{\mathbb{S}} : \mathbb{S} \subseteq \{\mathrm{rot}, \mathrm{col}\}\}$ corresponding to the various invariant representations. We use data augmentation to construct these invariant representations (this is possible since the lumpability condition holds).
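The hue-shift part of this augmentation can be sketched with the Python standard library alone (a toy sketch on nested lists of float RGB pixels; `shift_hue` and `augment` are hypothetical helpers, and arbitrary-angle rotation with cropping is omitted since it needs an image library):

```python
import colorsys
import random

def shift_hue(img, delta):
    """Shift the hue of every float-RGB pixel by `delta`, wrapping around 1."""
    out = []
    for row in img:
        new_row = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            new_row.append(colorsys.hsv_to_rgb((h + delta) % 1.0, s, v))
        out.append(new_row)
    return out

def augment(img, rng):
    # One random draw from T_col, as used to train the F_{..., col} classes.
    return shift_hue(img, rng.random())

rng = random.Random(0)
img = [[(rng.random(), rng.random(), rng.random()) for _ in range(4)]
       for _ in range(4)]
aug = augment(img, rng)
```

Because hue shifts compose additively modulo one, shifting by $0.5$ twice recovers the original image up to floating-point error, matching the intuition that these transformations commute.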
For example, $\mathcal{F}_{\{\mathrm{rot}, \mathrm{col}\}}$ refers to CNNs that were trained by + +Table 6: Results for different function classes on the CIFAR10 classification task with two sets of transformations (transformations that do not form groups) on images: arbitrary rotations (with cropping due to rotation) and arbitrary hue shifts. The task is invariant to both sets of transformations $(\overline{\mathbb{D}})$ and sensitive to none $(\mathbb{D})$ . $R(\mathcal{F})$ , $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ and $S(\mathcal{F},\mathcal{D})$ are as discussed in Section 4.2. Bold values indicate the function class chosen by GES method with the proposed scoring criterion. Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i\in \overline{\mathbb{D}}})$ . We see that the $S(\mathcal{F},\mathcal{D})$ loss selects the correct model class in training. + +
| Model class | $R(\mathcal{F})$ | $+\ \widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ | $+\ \mathrm{NLL}(\mathcal{F},\mathcal{D})$ | $=\ S(\mathcal{F},\mathcal{D})$ | Train Acc | Test Acc |
|---|---|---|---|---|---|---|
| $\mathcal{F}_{\emptyset}$ | 3 | 27725.875 | 17496.615 | 45225.490 | 85.60 | 21.48 |
| $\mathcal{F}_{\{\mathrm{col}\}}$ | 1 | 27716.947 | 22715.956 | 50433.903 | 81.28 | 21.85 |
| $\mathcal{F}_{\{\mathrm{rot}\}}$ | 1 | -60894.145 | 20365.793 | -40527.352 | 82.65 | 45.12 |
| **$\mathcal{F}_{\{\mathrm{rot},\mathrm{col}\}}$** | **0** | **-66262.157** | **23538.768** | **-42723.390** | 79.99 | 69.35 |
Table 7: Results for different function classes on the CIFAR10 classification task with two sets of transformations (transformations that do not form groups) on images: arbitrary-angle rotations (with cropping due to rotation) and arbitrary hue shifts. The task is invariant to arbitrary rotations of the image ($\bar{\mathbb{D}}$) but sensitive to color ($\mathbb{D}$). $R(\mathcal{F})$, $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ and $S(\mathcal{F},\mathcal{D})$ are as discussed in Section 4.2. **Bold** values indicate the function class chosen by the GES method with the proposed scoring criterion. Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i\in \bar{\mathbb{D}}})$. We see that the $S(\mathcal{F},\mathcal{D})$ loss selects the correct model class in training.
| Model class | $R(\mathcal{F})$ | $+\ \widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ | $+\ \mathrm{NLL}(\mathcal{F},\mathcal{D})$ | $=\ S(\mathcal{F},\mathcal{D})$ | Train Acc | Test Acc |
|---|---|---|---|---|---|---|
| $\mathcal{F}_{\emptyset}$ | 3 | 27724.256 | 42166.993 | 69894.250 | 64.37 | 17.16 |
| $\mathcal{F}_{\{\mathrm{col}\}}$ | 1 | 27715.023 | 49744.680 | 77460.703 | 42.69 | 10.91 |
| **$\mathcal{F}_{\{\mathrm{rot}\}}$** | **1** | **-91370.533** | **46218.086** | **-45151.447** | 61.77 | 52.60 |
| $\mathcal{F}_{\{\mathrm{rot},\mathrm{col}\}}$ | 0 | -92009.184 | 50246.908 | -41762.276 | 41.45 | 35.56 |
+ +augmenting both arbitrarily rotated images and hue-shifted images. Once again, $\mathcal{F}_{\emptyset}$ is the space of functions with no invariance (standard CNN with no data augmentations). + +Results. We show in Tables 6 and 7 that our method is able to find the correct invariance and achieves the best OOD test accuracy whereas the standard CNN with no invariance has poor OOD performance. + +# A.6 MORE ON LUMPABILITY (DEFINITION 2) + +We show that the lumpability condition of Definition 2 is equivalent to the normal subgroup condition of Mouli & Ribeiro (2021, Theorem 2) when the given equivalence relations are obtained from transformation groups. However, unlike the normal subgroup condition, the lumpability condition applies in the general case when the equivalence relations are not necessarily obtained via transformation groups. + +Proposition 1. Let $\sim_{G_1}$ and $\sim_{G_2}$ be two equivalence relations on the input space $\mathcal{X}$ obtained as orbits under transformation groups $G_1$ and $G_2$ respectively, i.e., for $i = 1,2$ , $\pmb{x} \sim_{G_i} \pmb{x}'$ iff there exists $t^{(i)} \in G_i$ with $\pmb{x}' = t^{(i)} \circ \pmb{x}$ . Then, $\sim_{G_1}$ is lumpable with respect to the transformations $G_2$ (Definition 2) if and only if $G_1$ is a normal subgroup of $G_1 \vee G_2$ , where $\vee$ is the join operator. + +Proof. First, given $\sim_{G_1}$ is lumpable with respect to $G_2$ , we wish to prove that $G_1$ is a normal subgroup of $G_1 \vee G_2$ . By definition of the join operator on transformation groups, $G_1$ is a subgroup of $G_1 \vee G_2$ . + +Next, consider an equivalence class $[x]_{G_1} \in \mathcal{X} / \sim_{G_1}$ . Then, by the lumpability of $\sim_{G_1}$ with respect to $G_2$ , we have that for all $t^{(2)} \in G_2$ , there exists $[x']_{G_1}$ with $x^* \in [x]_{G_1} \Rightarrow t^{(2)} \circ x^* \in [x']_{G_1}$ . 
In other words, each $t^{(2)}$ maps all points in one equivalence class $[x]_{G_1}$ to another equivalence class + +$[\pmb{x}^{\prime}]_{G_1}$ . Specifically, $t^{(2)}$ maps $\pmb{x} \in [\pmb{x}]_{G_1}$ to $t^{(2)} \circ \pmb{x} \in [\pmb{x}^{\prime}]_{G_1}$ . Thus, we can set $\pmb{x}^{\prime} = t^{(2)} \circ \pmb{x}$ without loss of generality. + +Then, for all $t^{(2)} \in G_2$ , we have from the lumpability condition that + +$$ +\boldsymbol {x} ^ {*} \in [ \boldsymbol {x} ] _ {G _ {1}} \Longrightarrow t ^ {(2)} \circ \boldsymbol {x} ^ {*} \in \left[ t ^ {(2)} \circ \boldsymbol {x} \right] _ {G _ {1}}. \tag {9} +$$ + +Recall from the definition of the equivalence class derived from a transformation group (i.e., the orbit) that $\pmb{x}^{*} \in [\pmb{x}]_{G_{1}}$ means that there exists a transformation $t^{(1)} \in G_{1}$ that maps $\pmb{x}$ to $\pmb{x}^{*}$ , i.e., $\pmb{x}^{*} = t^{(1)} \circ \pmb{x}$ . Similarly, $t^{(2)} \circ \pmb{x}^{*} \in [t^{(2)} \circ \pmb{x}]_{G_{1}}$ means that there exists another transformation $\tilde{t}^{(1)}$ such that $t^{(2)} \circ \pmb{x}^{*} = \tilde{t}^{(1)} \circ t^{(2)} \circ \pmb{x}$ . + +Equation (9) then becomes + +$$ +\exists t ^ {(1)} \in G _ {1} \text {s . t .} \boldsymbol {x} ^ {*} = t ^ {(1)} \circ \boldsymbol {x} \Rightarrow \exists \tilde {t} ^ {(1)} \in G _ {1} \text {s . t .} t ^ {(2)} \circ \boldsymbol {x} ^ {*} = \tilde {t} ^ {(1)} \circ t ^ {(2)} \circ \boldsymbol {x}, \tag {10} +$$ + +for all $t^{(2)}\in G_2$ + +Since Equation (10) holds for all $\pmb{x}^{*} \in [\pmb{x}]_{G_{1}}$ and for all $\pmb{x} \in \mathcal{X}$ , we have $\forall t^{(2)} \in G_2, \forall t^{(1)} \in G_1, \exists \tilde{t}^{(1)} \in G_1$ such that, + +$$ +t ^ {(2)} \circ t ^ {(1)} = \tilde {t} ^ {(1)} \circ t ^ {(2)}, +$$ + +which implies that $G_{1}$ is a normal subgroup of $G_{1} \vee G_{2}$ . The converse can be proved trivially by reversing the steps of the above proof. 
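As a concrete sanity check of the commuting condition for the groups of Appendix A.4 (a sketch only: a $90^{\circ}$ rotation acts on the spatial axes and a channel permutation on the color axis, so they commute and $\tilde{t}^{(1)}$ can be taken equal to $t^{(1)}$):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random((8, 8, 3))  # toy H x W x C image

def rot(img):
    # t^(1): a generator of G_rot (90-degree rotation of the spatial axes)
    return np.rot90(img, k=1, axes=(0, 1))

def col(img):
    # t^(2): a channel permutation from G_col (RGB -> BRG)
    return img[:, :, [2, 0, 1]]

# The two transformations act on disjoint axes, so they commute exactly:
# t^(2) o t^(1) = t^(1) o t^(2), i.e. t~^(1) = t^(1) in Equation (10).
commutes = np.array_equal(col(rot(x)), rot(col(x)))
print(commutes)  # True
```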
![](images/434dc85d8f70b0beef5657ce90da9ed2816a10c6df69191b0384f261be4e9c3b.jpg)

# BEiT: BERT PRE-TRAINING OF IMAGE TRANSFORMERS

Hangbo Bao†, Li Dong†, Songhao Piao†, Furu Wei‡

† Harbin Institute of Technology
$\ddagger$ Microsoft Research

https://github.com/microsoft/unilm

# ABSTRACT

We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Following BERT (Devlin et al., 2019) developed in the natural language processing area, we propose a masked image modeling task to pretrain vision Transformers. Specifically, each image has two views in our pre-training, i.e., image patches (such as $16 \times 16$ pixels), and visual tokens (i.e., discrete tokens). We first "tokenize" the original image into visual tokens.
Then we randomly mask some image patches and feed them into the backbone Transformer. The pre-training objective is to recover the original visual tokens based on the corrupted image patches. After pre-training BEiT, we directly fine-tune the model parameters on downstream tasks by appending task layers upon the pretrained encoder. Experimental results on image classification and semantic segmentation show that our model achieves competitive results with previous pre-training methods.

# 1 INTRODUCTION

Transformer (Vaswani et al., 2017) has achieved promising performance in computer vision (Dosovitskiy et al., 2020; Touvron et al., 2020). However, empirical studies show that vision Transformers require more training data than convolutional neural networks. In order to solve the data-hungry issue (Liu et al., 2021a), self-supervised pre-training is a promising solution to leverage large-scale image data. Several strands of methods have been explored for vision Transformers, such as contrastive learning (Chen et al., 2021; Xie et al., 2021), and self-distillation (Caron et al., 2021).

Concurrently, BERT (Devlin et al., 2019) has achieved great success in natural language processing. Its masked language modeling task first randomly masks some proportion of tokens within a text, and then recovers the masked tokens based on the Transformer encoding results of the corrupted text. Motivated by BERT, we turn to the denoising auto-encoding idea to pretrain vision Transformers, which has not been well studied by the vision community. It is challenging to directly apply BERT-style pre-training for image data. First of all, there is no pre-existing vocabulary for the vision Transformer's input unit, i.e., image patches. So we cannot simply employ a softmax classifier to predict over all possible candidates for masked patches. In contrast, the language vocabulary, such as words and BPE (Sennrich et al., 2016), is well-defined and eases auto-encoding prediction.
A straightforward alternative is regarding the task as a regression problem, which predicts the raw pixels of masked patches. However, such a pixel-level recovery task tends to waste modeling capability on pre-training short-range dependencies and high-frequency details (Ramesh et al., 2021). Our goal is to overcome the above issues for pre-training of vision Transformers.

In this work, we introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Inspired by BERT, we propose a pre-training task, namely, masked image modeling (MIM). As shown in Figure 1, MIM uses two views for each image, i.e., image patches, and visual tokens. We split the image into a grid of patches that are the input representation of the backbone Transformer. Moreover, we "tokenize" the image into discrete visual tokens, which are obtained from the latent codes of a discrete VAE (Ramesh et al., 2021).

![](images/a3045f3fbe9d7b8ef8f3539142e11452a7c5529aa10276b25f2a3ce4e8c0cd6b.jpg)
Figure 1: Overview of BEiT pre-training. Before pre-training, we learn an "image tokenizer" via autoencoding-style reconstruction, where an image is tokenized into discrete visual tokens according to the learned vocabulary. During pre-training, each image has two views, i.e., image patches, and visual tokens. We randomly mask some proportion of image patches (gray patches in the figure) and replace them with a special mask embedding [M]. Then the patches are fed to a backbone vision Transformer. The pre-training task aims at predicting the visual tokens of the original image based on the encoding vectors of the corrupted image.

During pre-training, we randomly mask some proportion of image patches, and feed the corrupted input to the Transformer. The model learns to recover the visual tokens of the original image, instead of the raw pixels of masked patches.
We perform self-supervised learning and then fine-tune the pretrained BEiT on two downstream tasks, i.e., image classification, and semantic segmentation. Experimental results indicate that BEiT outperforms both from-scratch training and previous strong self-supervised models. Moreover, BEiT is complementary to supervised pre-training. Performance of BEiT can be further improved by intermediate fine-tuning with ImageNet labels. Ablation studies show that our proposed techniques are critical to the effectiveness of BERT-style pre-training for image data. Apart from performance, the improvements in convergence speed and stability of fine-tuning reduce training costs on end tasks. In addition, we demonstrate that self-supervised BEiT can learn reasonable semantic regions via pre-training, unleashing the rich supervision signals contained in images.

Our contributions are summarized as follows:

- We propose a masked image modeling task to pretrain vision Transformers in a self-supervised manner. We also provide a theoretical explanation from the perspective of variational autoencoders.
- We pretrain BEiT and conduct extensive fine-tuning experiments on downstream tasks, such as image classification, and semantic segmentation.
- We show that the self-attention mechanism of self-supervised BEiT learns to distinguish semantic regions and object boundaries, although without using any human annotation.

# 2 METHODS

Given an input image $x$, BEiT encodes it to contextualized vector representations. As shown in Figure 1, BEiT is pretrained by the masked image modeling (MIM) task in a self-supervised learning manner. MIM aims at recovering the masked image patches based on encoding vectors. For
+ +# 2.1 IMAGE REPRESENTATIONS + +The images have two views of representations in our method, namely, image patch, and visual tokens. The two types serve as input and output representations during pre-training, respectively. + +# 2.1.1 IMAGE PATCH + +The 2D image is split into a sequence of patches (Dosovitskiy et al., 2020), so that a standard Transformer can directly accept image data. Formally, we reshape the image $\pmb{x} \in \mathbb{R}^{H \times W \times C}$ into $N = HW / P^2$ patches $\pmb{x}^p \in \mathbb{R}^{N \times (P^2C)}$ , where $C$ is the number of channels, $(H, W)$ is the input image resolution, and $(P, P)$ is the resolution of each patch. The image patches $\{\pmb{x}_i^p\}_{i=1}^N$ are flattened into vectors and are linearly projected, which is similar to word embeddings in BERT (Devlin et al., 2019). Image patches preserve raw pixels and are used as input features in BEiT. + +In our experiments, we split each $224 \times 224$ image into a $14 \times 14$ grid of image patches, where each patch is $16 \times 16$ . + +# 2.1.2 VISUALTOKEN + +Similar to natural language, we represent the image as a sequence of discrete tokens obtained by an "image tokenizer", instead of raw pixels. Specifically, we tokenize the image $\boldsymbol{x} \in \mathbb{R}^{H \times W \times C}$ into $z = [z_1, \ldots, z_N] \in \mathcal{V}^{h \times w}$ , where the vocabulary $\mathcal{V} = \{1, \ldots, |\mathcal{V}|\}$ contains discrete token indices. + +Following (Ramesh et al., 2021), we use the image tokenizer learned by discrete variational autoencoder (dVAE). There are two modules during visual token learning, namely, tokenizer and decoder. The tokenizer $q_{\phi}(\boldsymbol{z}|\boldsymbol{x})$ maps image pixels $\boldsymbol{x}$ into discrete tokens $\boldsymbol{z}$ according to a visual codebook (i.e., vocabulary). The decoder $p_{\psi}(\boldsymbol{x}|\boldsymbol{z})$ learns to reconstruct the input image $\boldsymbol{x}$ based on the visual tokens $\boldsymbol{z}$ . 
The reconstruction objective can be written as $\mathbb{E}_{\boldsymbol{z}\sim q_{\phi}(\boldsymbol{z}|\boldsymbol{x})}[\log p_{\psi}(\boldsymbol{x}|\boldsymbol{z})]$. Because the latent visual tokens are discrete, the model training is non-differentiable, so Gumbel-softmax relaxation (Jang et al., 2017; Maddison et al., 2017) is employed to train the model parameters. Moreover, a uniform prior is put on $q_{\phi}$ during dVAE training. Refer to (Ramesh et al., 2021) for more training details of the image tokenizer.

We tokenize each image into a $14 \times 14$ grid of visual tokens. Notice that the number of visual tokens and the number of image patches for one image are the same. The vocabulary size is set to $|\mathcal{V}| = 8192$. In our work, we directly use the publicly available image tokenizer described in (Ramesh et al., 2021). We also compare it with a re-implemented tokenizer in Appendix C.

# 2.2 BACKBONE NETWORK: IMAGE TRANSFORMER

Following ViT (Dosovitskiy et al., 2020), we use the standard Transformer (Vaswani et al., 2017) as the backbone network, so that the results can be directly compared with previous work in terms of network architecture.

The input of the Transformer is a sequence of image patches $\{\pmb{x}_i^p\}_{i=1}^N$. The patches are linearly projected to obtain patch embeddings $\pmb{E}\pmb{x}_i^p$, where $\pmb{E} \in \mathbb{R}^{(P^2C) \times D}$. Moreover, we prepend a special token [S] to the input sequence. We also add standard learnable 1D position embeddings $\pmb{E}_{pos} \in \mathbb{R}^{N \times D}$ to the patch embeddings. The input vectors $\pmb{H}_0 = [\pmb{e}_{[S]}, \pmb{E}\pmb{x}_1^p, \dots, \pmb{E}\pmb{x}_N^p] + \pmb{E}_{pos}$ are fed into the Transformer. The encoder contains $L$ layers of Transformer blocks $\pmb{H}^l = \mathrm{Transformer}(\pmb{H}^{l-1})$, where $l = 1, \dots, L$.
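To make the shapes concrete, here is a minimal NumPy sketch of the input construction just described. The projection $\pmb{E}$, the [S] embedding, and the position embeddings are random stand-ins for learned parameters, and we assume the position embeddings also cover the [S] slot (so $N+1$ rows):

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 224; P = 16; C = 3; D = 768
N = (H * W) // (P * P)                       # 196 patches

img = rng.normal(size=(H, W, C))
# Split into a 14x14 grid of 16x16 patches, each flattened to length P*P*C = 768.
patches = (img.reshape(H // P, P, W // P, P, C)
              .transpose(0, 2, 1, 3, 4)
              .reshape(N, P * P * C))

E = rng.normal(size=(P * P * C, D)) * 0.02   # patch projection (stand-in for learned E)
e_s = rng.normal(size=(1, D)) * 0.02         # special [S] token embedding
E_pos = rng.normal(size=(N + 1, D)) * 0.02   # learnable 1D position embeddings

# H_0 = [e_[S], E x_1, ..., E x_N] + E_pos, the input to the L Transformer blocks.
H0 = np.concatenate([e_s, patches @ E], axis=0) + E_pos
print(H0.shape)  # (197, 768)
```

The same `patches` array also serves as the per-patch input that the tokenizer side maps to 196 visual tokens.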
The output vectors of the last layer $\pmb{H}^L = [\pmb{h}_{[S]}^L, \pmb{h}_1^L, \dots, \pmb{h}_N^L]$ are used as the encoded representations of the image patches, where $\pmb{h}_i^L$ is the vector of the $i$-th image patch.

# 2.3 PRE-TRAINING BEIT: MASKED IMAGE MODELING

We propose a masked image modeling (MIM) task. We randomly mask some percentage of image patches, and then predict the visual tokens that correspond to the masked patches.

Figure 1 shows the overview of our method. As presented in Section 2.1, given an input image $\pmb{x}$, we split it into $N$ image patches $(\{\pmb{x}_i^p\}_{i=1}^N)$, and tokenize it into $N$ visual tokens $(\{z_i\}_{i=1}^N)$. We randomly mask approximately $40\%$ of the image patches, where the masked positions are denoted as $\mathcal{M} \in \{1, \dots, N\}^{0.4N}$. Next we replace the masked patches with a learnable embedding $e_{[\mathbb{M}]} \in \mathbb{R}^D$. The corrupted image patches $x^{\mathcal{M}} = \{x_i^p : i \notin \mathcal{M}\}_{i=1}^N \cup \{e_{[\mathbb{M}]} : i \in \mathcal{M}\}_{i=1}^N$ are then fed into the $L$-layer Transformer as described in Section 2.2. The final hidden vectors $\{h_i^L\}_{i=1}^N$ are regarded as encoded representations of the input patches. For each masked position $\{h_i^L : i \in \mathcal{M}\}_{i=1}^N$, we use a softmax classifier to predict the corresponding visual tokens, $p_{\mathrm{MIM}}(z'|x^{\mathcal{M}}) = \mathrm{softmax}_{z'}(W_c h_i^L + b_c)$, where $x^{\mathcal{M}}$ is the corrupted image, $W_c \in \mathbb{R}^{|\mathcal{V}| \times D}$, and $b_c \in \mathbb{R}^{|\mathcal{V}|}$.
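As a shape check on this classifier, the following sketch scores a few masked positions against the $|\mathcal{V}| = 8192$ visual tokens; the hidden vectors and head weights are random placeholders, not trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, V = 196, 768, 8192
h_L = rng.normal(size=(N, D))            # final hidden vectors {h_i^L}
W_c = rng.normal(size=(V, D)) * 0.02     # MIM head weights, shape (|V|, D)
b_c = np.zeros(V)

masked = [3, 17, 42]                     # example masked positions M
logits = h_L[masked] @ W_c.T + b_c       # one score per vocabulary entry
logits -= logits.max(axis=-1, keepdims=True)
p_mim = np.exp(logits)
p_mim /= p_mim.sum(axis=-1, keepdims=True)   # softmax over the 8192 visual tokens
print(p_mim.shape)  # (3, 8192)
```

Training then maximizes `log p_mim` at the ground-truth token index of each masked position.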
The pre-training objective is to maximize the log-likelihood of the correct visual tokens $z_i$ given the corrupted image:

$$
\max \sum_{x \in \mathcal{D}} \mathbb{E}_{\mathcal{M}} \left[ \sum_{i \in \mathcal{M}} \log p_{\mathrm{MIM}}\left(z_i \mid x^{\mathcal{M}}\right) \right] \tag{1}
$$

where $\mathcal{D}$ is the training corpus, $\mathcal{M}$ represents the randomly masked positions, and $x^{\mathcal{M}}$ is the corrupted image that is masked according to $\mathcal{M}$.

Rather than randomly choosing patches for the masked positions $\mathcal{M}$, we employ blockwise masking in our work. As summarized in Algorithm 1, a block of image patches is masked each time. For each block, we set the minimum number of patches to 16. Then we randomly choose an aspect ratio for the masking block. We repeat the above two steps until obtaining enough masked patches, i.e., $0.4N$, where $N$ is the total number of image patches and 0.4 is the masking ratio.

Algorithm 1 Blockwise Masking
- Input: $N$ ($= h \times w$) image patches
- Output: Masked positions $\mathcal{M}$
- $\mathcal{M} \gets \{\}$
- repeat
  - $s \gets \mathrm{Rand}(16, 0.4N - |\mathcal{M}|)$ ▷ Block size
  - $r \gets \mathrm{Rand}(0.3, \frac{1}{0.3})$ ▷ Aspect ratio of block
  - $a \gets \sqrt{s \cdot r}$; $b \gets \sqrt{s / r}$
  - $t \gets \mathrm{Rand}(0, h - a)$; $l \gets \mathrm{Rand}(0, w - b)$
  - $\mathcal{M} \gets \mathcal{M} \cup \{(i, j) : i \in [t, t + a), j \in [l, l + b)\}$
- until $|\mathcal{M}| > 0.4N$ ▷ Masking ratio is $40\%$
- return $\mathcal{M}$

The MIM task is greatly inspired by masked language modeling (Devlin et al., 2019), which is one of the most successful pre-training objectives in natural language processing. Moreover, blockwise (or n-gram) masking is also widely applied in BERT-like models (Joshi et al., 2020; Bao et al., 2020; Raffel et al., 2020).
However, directly using pixel-level auto-encoding (i.e., recovering the pixels of masked patches) for vision pre-training pushes the model to focus on short-range dependencies and high-frequency details (Ramesh et al., 2021). BEiT overcomes this issue by predicting discrete visual tokens, which summarize the details into high-level abstractions. Ablation studies in Section 3.3 show that our proposed method significantly outperforms pixel-level auto-encoding.

# 2.4 FROM THE PERSPECTIVE OF VARIATIONAL AUTOENCODER

The BEiT pre-training can be viewed as variational autoencoder (Kingma & Welling, 2014) training. Let $x$ denote the original image, $\tilde{x}$ the masked image, and $z$ the visual tokens. Consider the evidence lower bound (ELBO) of the log-likelihood $p(x|\tilde{x})$, i.e., recovering the original image from its corrupted version:

$$
\sum_{(x_i, \tilde{x}_i) \in \mathcal{D}} \log p\left(x_i \mid \tilde{x}_i\right) \geq \sum_{(x_i, \tilde{x}_i) \in \mathcal{D}} \Big( \underbrace{\mathbb{E}_{z_i \sim q_{\phi}(\mathbf{z} \mid x_i)}\left[\log p_{\psi}\left(x_i \mid z_i\right)\right]}_{\text{Visual Token Reconstruction}} - D_{\mathrm{KL}}\left[q_{\phi}(\mathbf{z} \mid x_i), p_{\theta}(\mathbf{z} \mid \tilde{x}_i)\right] \Big) \tag{2}
$$

where (1) $q_{\phi}(z|x)$ denotes the image tokenizer that obtains visual tokens; (2) $p_{\psi}(x|z)$ decodes the original image given input visual tokens; (3) $p_{\theta}(z|\tilde{x})$ recovers the visual tokens based on the masked image, which is our MIM pre-training task.

We learn the model following a two-stage procedure similar to (van den Oord et al., 2017; Razavi et al., 2019). In the first stage, we obtain the image tokenizer as a discrete variational autoencoder (Ramesh et al., 2021).
Specifically, the first stage minimizes the reconstruction loss $-\mathbb{E}_{z_i \sim q_\phi(\mathbf{z} | x_i)}[\log p_\psi(x_i | z_i)]$ with a uniform prior, as described in Equation (2). In the second stage, we learn the prior $p_\theta$ while keeping $q_\phi$ and $p_\psi$ fixed. We simplify $q_\phi(\mathbf{z} | x_i)$ to a one-point distribution with the most likely visual tokens $\hat{z}_i = \arg \max_z q_\phi(z | x_i)$. Then Equation (2) can be rewritten as:

$$
\sum_{(x_i, \tilde{x}_i) \in \mathcal{D}} \Big( \underbrace{\mathbb{E}_{z_i \sim q_{\phi}(z \mid x_i)}\left[\log p_{\psi}\left(x_i \mid z_i\right)\right]}_{\text{Stage 1: Visual Token Reconstruction}} + \underbrace{\log p_{\theta}\left(\hat{z}_i \mid \tilde{x}_i\right)}_{\text{Stage 2: Masked Image Modeling}} \Big) \tag{3}
$$

where the second term is our BEiT pre-training objective.

# 2.5 PRE-TRAINING SETUP

The network architecture of BEiT follows that of ViT-Base (Dosovitskiy et al., 2020) for a fair comparison. We use a 12-layer Transformer with 768 hidden size and 12 attention heads. The intermediate size of the feed-forward networks is 3072. We employ the default $16 \times 16$ input patch size. We directly borrow the image tokenizer trained by Ramesh et al. (2021). The vocabulary size of visual tokens is 8192.

We pretrain BEiT on the training set of ImageNet-1K (Russakovsky et al., 2015), which contains about 1.2M images. Our augmentation policy includes random resized cropping, horizontal flipping, and color jittering (Wu et al., 2018). Notice that we do not use the labels for self-supervised learning. We use the $224 \times 224$ resolution in our experiments, so the input is split into $14 \times 14$ image patches and the same number of visual tokens. We randomly mask at most 75 patches (i.e., roughly $40\%$ of the total image patches).
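The blockwise procedure of Algorithm 1, combined with the 75-patch cap just mentioned, can be sketched as follows. The rounding of block dimensions and the clamping to the grid boundary are our assumptions, since Algorithm 1 leaves them unspecified:

```python
import math
import random

def blockwise_mask(h=14, w=14, ratio=0.4, min_block=16, max_masked=75):
    """Sample masked positions by repeatedly drawing rectangular blocks (Algorithm 1)."""
    n = h * w
    budget = min(int(ratio * n), max_masked)   # at most 75 of 196 patches
    masked = set()
    while len(masked) < budget:
        s = random.randint(min_block, max(min_block, budget - len(masked)))  # block size
        r = random.uniform(0.3, 1 / 0.3)                                     # aspect ratio
        a = min(h, max(1, round(math.sqrt(s * r))))                          # block height
        b = min(w, max(1, round(math.sqrt(s / r))))                          # block width
        t = random.randint(0, h - a)                                         # top edge
        l = random.randint(0, w - b)                                         # left edge
        masked |= {(i, j) for i in range(t, t + a) for j in range(l, l + b)}
    return masked

random.seed(0)
m = blockwise_mask()
print(len(m) >= 75)  # True: the loop only exits once the budget is reached
```

Because each block is drawn independently, the final mask can slightly overshoot the budget, matching the "until $|\mathcal{M}| > 0.4N$" stopping rule.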
The pre-training runs for about 500k steps (i.e., 800 epochs) with a 2k batch size. AdamW (Loshchilov & Hutter, 2019) with $\beta_{1} = 0.9$, $\beta_{2} = 0.999$ is employed for optimization. The learning rate is set to 1.5e-3, with a warmup of 10 epochs and cosine learning rate decay. The weight decay is 0.05. We employ stochastic depth (Huang et al., 2016) with a 0.1 rate, and disable dropout. The 500k training steps take about five days using 16 Nvidia Tesla V100 32GB GPU cards.

We find that proper initialization is important to stabilize the Transformer, especially for large-scale pre-training. We first randomly initialize all the parameters within a small range, such as $[-0.02, 0.02]$. Then, for the $l$-th Transformer layer, we rescale the output matrices (i.e., the last linear projection within each sub-layer) of the self-attention module and the feed-forward network by $\frac{1}{\sqrt{2l}}$.

# 2.6 FINE-TUNING BEIT ON DOWNSTREAM VISION TASKS

After pre-training BEiT, we append a task layer upon the Transformer and fine-tune the parameters on downstream tasks, like BERT. We take image classification and semantic segmentation as examples in our work. It is straightforward to leverage the pre-training-then-fine-tuning paradigm on other vision tasks with BEiT.

Image classification. For image classification tasks, we directly employ a simple linear classifier as the task layer. Specifically, we use average pooling to aggregate the representations, and feed the global representation to a softmax classifier. The category probabilities are computed as $\mathrm{softmax}(\mathrm{avg}(\{h_i^L\}_{i=1}^N W_c))$, where $h_i^L$ is the final encoding vector of the $i$-th image patch, $W_c \in \mathbb{R}^{D \times C}$ is a parameter matrix, and $C$ is the number of labels. We maximize the likelihood of the labeled data by updating the parameters of BEiT and the softmax classifier.

Semantic segmentation.
For semantic segmentation, we follow the task layer used in SETR-PUP (Zheng et al., 2020). To be specific, we use the pretrained BEiT as a backbone encoder, and incorporate several deconvolution layers as a decoder to produce the segmentation. The model is also end-to-end fine-tuned, similar to image classification.

Intermediate fine-tuning. After self-supervised pre-training, we can further train BEiT on a data-rich intermediate dataset (i.e., ImageNet-1K in our work), and then finetune the model on the target downstream tasks. Such intermediate fine-tuning is the common practice of BERT fine-tuning in NLP (Pruksachatkun et al., 2020). We directly follow the same practice for BEiT.

# 3 EXPERIMENTS

We conduct full fine-tuning experiments on image classification and semantic segmentation. Moreover, we present various ablation studies for pre-training and analyze the representations learned by BEiT. We also report linear probes on ImageNet in Appendix D.

# 3.1 IMAGE CLASSIFICATION

The image classification task classifies input images into various categories. We evaluate BEiT on the ILSVRC-2012 ImageNet dataset (Russakovsky et al., 2015) with 1k classes and 1.3M images. We directly follow most of the hyperparameters of DeiT (Touvron et al., 2020) in our fine-tuning experiments for a fair comparison. We reduce the number of fine-tuning epochs compared with training from scratch, as BEiT has been pre-trained. Accordingly, we use a larger learning rate with layer-wise decay. The detailed hyperparameters are summarized in Appendix H.

Table 1 reports the top-1 accuracy on image classification. We compare BEiT with vision Transformers trained by random initialization, supervised pre-training, and previous self-supervised learning methods. All the compared models are base-size, except iGPT, which has 1.36B parameters. Pre-training is conducted on ImageNet for comparison purposes, except ViT-JFT300M, which is pretrained on Google's in-house 300M images.
Compared with the models trained by random initialization, we find that pre-trained BEiT significantly improves performance on ImageNet, which shows its effectiveness under the rich-resource setting.

Moreover, we compare BEiT with previous state-of-the-art self-supervised methods for Transformers, such as DINO (Caron et al., 2021) and MoCo v3 (Chen et al., 2021). Our proposed method outperforms previous models on ImageNet fine-tuning. Among them, iGPT-1.36B (Chen et al., 2020a) uses many more parameters (i.e., 1.36B vs 86M), and ViT-JFT300M (Dosovitskiy et al., 2020) is pretrained on a larger corpus (i.e., 300M vs 1.3M images), while the others pretrain ViT-Base on ImageNet-1K. iGPT-1.36B and ViT-JFT300M are the most comparable methods, as they also follow auto-encoding pre-training for vision Transformers. Specifically, iGPT uses clustered image tokens as both input and output for image GPT or image BERT. In contrast, we use image patches as input to preserve raw pixels, and employ discrete visual tokens as a prediction bottleneck. ViT-JFT300M predicts the mean 3-bit color of each masked patch, rather than visual tokens learned by a discrete VAE. We also pretrain the self-supervised tasks of BEiT and DINO in a multi-task learning manner, which is presented in Appendix E.

In addition, we evaluate our proposed method with intermediate fine-tuning. In other words, we first pretrain BEiT in a self-supervised manner, and then fine-tune the pretrained model on ImageNet with labeled data. The results show that BEiT is complementary to supervised pre-training, achieving additional gains after intermediate fine-tuning on ImageNet.

Fine-tuning to $384 \times 384$ resolution. After fine-tuning with resolution $224 \times 224$, we additionally fine-tune the model on $384 \times 384$ images for 10 more epochs. We follow the standard higher-resolution setting of DeiT (Touvron et al., 2020), except using fewer epochs.
Notice that we keep the patch size the same for both $224 \times 224$ and $384 \times 384$ images, so the input sequence length of the Transformer becomes longer for higher resolutions. Table 1 shows that higher resolution improves the BEiT results by $1+$ points on ImageNet. More importantly, $\mathrm{BEiT}_{384}$ pretrained on ImageNet-1K even outperforms supervised pre-training $\mathrm{ViT}_{384}$ that uses ImageNet-22K, when they use the same input resolution.

Scaling up to larger size. We further scale up BEiT to the large size (same as ViT-L). As shown in Table 1, $\mathrm{ViT}_{384}$-L is worse than $\mathrm{ViT}_{384}$-B on ImageNet when training from scratch. The results verify the data-hungry issue of vision Transformers. Supervised pre-training on ImageNet-22K partially relieves the issue, where $\mathrm{ViT}_{384}$-L finally outperforms $\mathrm{ViT}_{384}$-B by 1.2. In comparison, BEiT-L is better than BEiT-B by 2.0, and $\mathrm{BEiT}_{384}$-L outperforms $\mathrm{BEiT}_{384}$-B by 1.7. In other words, the benefits of scaling up BEiT from base to large are greater than those of supervised pre-training with ImageNet-22K. More importantly, comparing $\mathrm{BEiT}_{384}$ with $\mathrm{ViT}_{384}$ that conducts supervised pre-training on ImageNet-22K, the improvements of BEiT become greater along with scaling the size from base
| Models | Model Size | Resolution | ImageNet |
| :--- | :---: | :---: | :---: |
| *Training from scratch (i.e., random initialization)* | | | |
| ViT$_{384}$-B (Dosovitskiy et al., 2020) | 86M | $384^2$ | 77.9 |
| ViT$_{384}$-L (Dosovitskiy et al., 2020) | 307M | $384^2$ | 76.5 |
| DeiT-B (Touvron et al., 2020) | 86M | $224^2$ | 81.8 |
| DeiT$_{384}$-B (Touvron et al., 2020) | 86M | $384^2$ | 83.1 |
| *Supervised Pre-Training on ImageNet-22K (using labeled data)* | | | |
| ViT$_{384}$-B (Dosovitskiy et al., 2020) | 86M | $384^2$ | 84.0 |
| ViT$_{384}$-L (Dosovitskiy et al., 2020) | 307M | $384^2$ | 85.2 |
| *Self-Supervised Pre-Training on ImageNet-1K (without labeled data)* | | | |
| iGPT-1.36B† (Chen et al., 2020a) | 1.36B | $224^2$ | 66.5 |
| ViT$_{384}$-B-JFT300M‡ (Dosovitskiy et al., 2020) | 86M | $384^2$ | 79.9 |
| MoCo v3-B (Chen et al., 2021) | 86M | $224^2$ | 83.2 |
| MoCo v3-L (Chen et al., 2021) | 307M | $224^2$ | 84.1 |
| DINO-B (Caron et al., 2021) | 86M | $224^2$ | 82.8 |
| BEiT-B (ours) | 86M | $224^2$ | 83.2 |
| BEiT$_{384}$-B (ours) | 86M | $384^2$ | 84.6 |
| BEiT-L (ours) | 307M | $224^2$ | 85.2 |
| BEiT$_{384}$-L (ours) | 307M | $384^2$ | 86.3 |

Table 1: Top-1 accuracy on ImageNet-1K. We evaluate base- ("-B") and large-size ("-L") models at resolutions $224 \times 224$ and $384 \times 384$. †: iGPT-1.36B contains 1.36 billion parameters, while others are base-size models. ‡: ViT$_{384}$-B-JFT300M is pretrained with the "masked patch prediction" task on Google's in-house 300M images, while others use ImageNet.

![](images/5577c666d7884fff04fd338c4c49a2b0bb02b78a9b3b86ab7142682d05ec8047.jpg)
Figure 2: Convergence curves of training DeiT from scratch and fine-tuning BEiT on ImageNet-1K.
| Models | ADE20K |
| :--- | :---: |
| Supervised Pre-Training on ImageNet | 45.3 |
| DINO (Caron et al., 2021) | 44.1 |
| BEiT (ours) | 45.6 |
| BEiT + Intermediate Fine-Tuning (ours) | 47.7 |

Table 3: Results of semantic segmentation on ADE20K. We use SETR-PUP (Zheng et al., 2020) as the task layer and report results of single-scale inference.

(i.e., 0.6) to large (i.e., 1.1). The results suggest that BEiT tends to help more for extremely large models (such as 1B, or 10B), especially when labeled data are insufficient to conduct supervised pre-training for such large models.

Convergence curves. Figure 2 compares the convergence curves of the training-from-scratch and pre-training-then-fine-tuning paradigms. We find that fine-tuning BEiT not only achieves better performance, but also converges much faster than training DeiT from scratch. Moreover, fine-tuning BEiT can reach reasonable numbers within very few epochs.
| Models | ImageNet | ADE20K |
| :--- | :---: | :---: |
| BEiT (300 epochs) | 82.86 | 44.65 |
| - Blockwise masking | 82.77 | 42.93 |
| - Visual tokens (i.e., recover masked pixels) | 81.04 | 41.38 |
| - Visual tokens - Blockwise masking | 80.50 | 37.09 |
| + Recover 100% visual tokens | 82.59 | 40.93 |
| - Masking + Recover 100% visual tokens | 81.67 | 36.73 |
| Pretrain longer (800 epochs) | 83.19 | 45.58 |
Table 4: Ablation studies for BEiT pre-training on image classification and semantic segmentation.

# 3.2 SEMANTIC SEGMENTATION

Semantic segmentation aims to predict a corresponding class for each pixel of the input image. We evaluate BEiT on the ADE20K benchmark (Zhou et al., 2019) with 25K images and 150 semantic categories. We report the metric of mean Intersection over Union (mIoU) averaged over all semantic categories. As presented in Section 2.6, we directly follow the task layer and most of the hyperparameters described in SETR-PUP (Zheng et al., 2020). On ADE20K, we use AdamW (Loshchilov & Hutter, 2019) as the optimizer. The learning rate is set to 1e-3 with layer-wise decay, similar to image classification. We conduct fine-tuning for 160K steps. The batch size is 16. The detailed hyperparameters are described in Appendix I.

As shown in Table 3, we compare BEiT with supervised pre-training that relies on labeled data of ImageNet. We find that our proposed method achieves better performance than supervised pre-training, although BEiT does not require manual annotations for pre-training. Moreover, we employ intermediate fine-tuning for BEiT on ImageNet, i.e., we first fine-tune the pretrained BEiT on ImageNet, and then fine-tune the model on ADE20K. The results indicate that intermediate fine-tuning further improves BEiT on semantic segmentation.

# 3.3 ABLATION STUDIES

We conduct ablation studies to analyze the contributions of each component in BEiT. The models are evaluated on image classification (i.e., ImageNet) and semantic segmentation (i.e., ADE20K). We set the default pre-training steps to 300 epochs for the ablation studies, which is $37.5\%$ of the total steps used in the previous experiments.

Table 4 reports the results of various model variants. First, we ablate blockwise masking by randomly sampling the masked positions. We find that blockwise masking is beneficial on both tasks, especially on semantic segmentation.
Second, we ablate the usage of visual tokens by predicting the raw pixels of masked patches, i.e., the pre-training task becomes a pixel regression problem to recover the masked patches. Our proposed masked image modeling task significantly outperforms naive pixel-level auto-encoding. Compared with the results in Table 1, this ablation is even worse than training a vision Transformer from scratch on both tasks. The results indicate that the prediction of visual tokens is the key ingredient of BEiT. Third, we ablate the usage of visual tokens and blockwise masking together. We find that blockwise masking is even more helpful for pixel-level auto-encoding, as it alleviates the short-range dependency problem. Fourth, recovering all the visual tokens harms performance on downstream tasks. Fifth, we compare BEiT with different numbers of training steps. Pre-training the model longer can further improve performance on downstream tasks.

# 3.4 ANALYSIS OF SELF-ATTENTION MAP

We show that the self-attention mechanism in BEiT can separate objects, even though our pre-training does not rely on any manual annotation at all. Similar properties are also observed by Caron et al. (2021). The probing images are taken from the MS COCO (Lin et al., 2014) corpus to avoid them appearing in the pre-training data.

![](images/3a2d5229ef56ccd3e1d2cbf7669102808dc4db2c29e8de96d7b9aa89e6942539.jpg)
Figure 3: Self-attention map for different reference points. The self-attention mechanism in BEiT is able to separate objects, although self-supervised pre-training does not use manual annotations.

As shown in Figure 3, we plot the self-attention map for different reference points within an image. The visualizations are produced by attention scores computed via query-key product in the last layer. For each reference point, we use the corresponding patch as query, and show which patch it attends to.
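This visualization amounts to one softmax-normalized row of the last layer's attention for the chosen reference patch. A toy sketch of the computation (random stand-ins for the trained model's last-layer queries and keys; a real map would use one attention head, or the average over heads):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 197, 64                         # [S] token + 14*14 patches; per-head dim
Q = rng.normal(size=(N, d))            # stand-in last-layer queries
K = rng.normal(size=(N, d))            # stand-in last-layer keys

ref = 5                                # reference patch used as the query
scores = Q[ref] @ K.T / np.sqrt(d)     # query-key product, scaled
attn = np.exp(scores - scores.max())
attn /= attn.sum()                     # softmax over all positions
heatmap = attn[1:].reshape(14, 14)     # drop [S]; arrange on the patch grid
print(heatmap.shape)  # (14, 14)
```

Upsampling `heatmap` to the image resolution and overlaying it yields the kind of per-reference-point maps shown in the figure.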
After pre-training, BEiT learns to distinguish semantic regions using self-attention heads, without any task-specific supervision. This property partly explains why BEiT is able to help downstream tasks. The knowledge acquired by BEiT potentially improves the generalization ability of fine-tuned models, especially on small-scale datasets.

# 4 RELATED WORK

Self-supervised visual representation learning. Various methods have been introduced over the years to pretrain vision models in a self-supervised manner. Pioneering works design clever pretext tasks, such as predicting patch orderings (Noroozi & Favaro, 2016), colorization (Zhang et al., 2016), and predicting rotation angles (Komodakis & Gidaris, 2018). In addition, Trinh et al. (2019) propose to mask some patches within an image, and classify whether the masked patches are real or fake for each masked position. The method is similar to a masked version of Jigsaw pre-training (Noroozi & Favaro, 2016). The recent strand of research follows the contrastive paradigm (Wu et al., 2018; Oord et al., 2018; Hjelm et al., 2019; Bachman et al., 2019; He et al., 2020; Chen et al., 2020b;c). The models typically regard various data augmentations as different views of an image, and then make the representations of positive pairs similar while pushing negative pairs away. In order to obtain enough informative negative samples in contrastive learning, the methods usually rely on large memory banks (Wu et al., 2018; He et al., 2020) or large batch sizes (Chen et al., 2020b). BYOL (Grill et al., 2020) and SimSiam (Chen & He, 2020) further eliminate the requirement of negative samples, using various techniques to avoid representation collapse. Another strand of methods uses clustering to organize image examples (Caron et al., 2018; Asano et al., 2020; Caron et al., 2020; Li et al., 2021).

Self-supervised vision Transformers.
Pre-training vision Transformers has received significant attention recently due to the data-hungry issue. iGPT (Chen et al., 2020a) first creates a 9-bit color palette by k-means clustering RGB pixels, and then uses the clustered tokens to represent images. iGPT then uses the tasks of BERT and GPT to pretrain Transformers. In comparison, our proposed method uses image patches as input without losing pixel-level information. Moreover, our visual tokens are obtained by a discrete VAE instead of clustering. ViT (Dosovitskiy et al., 2020) conducts a preliminary exploration with the masked patch prediction task, which predicts the 3-bit mean color of the masked patches. Dosovitskiy et al. (2020) also report that pixel-level auto-encoding performs worse, although it is the most straightforward translation of BERT from NLP to CV. Rather than using heuristically designed pre-training tasks, our proposed model leverages visual tokens learned by a discrete VAE, which not only achieves better performance but also is theoretically better motivated. Apart from masked auto-encoding, other mainstream research works use contrastive learning (Chen et al., 2021; Xie et al., 2021) and self-distillation (Caron et al., 2021). In comparison, BEiT achieves several times higher pre-training throughput (Appendix E) and lower memory consumption. These advantages make BEiT appealing for scaling up vision Transformers.

# 5 CONCLUSION

We introduce a self-supervised pre-training framework for vision Transformers, achieving strong fine-tuning results on downstream tasks such as image classification and semantic segmentation. We show that the proposed method is critical to make BERT-like pre-training (i.e., auto-encoding with masked input) work well for image Transformers. We also present the intriguing property of automatically acquired knowledge about semantic regions, without using any human-annotated data.
In the future, we would like to scale up BEiT pre-training in terms of data size and model size. Moreover, we will conduct multimodal pre-training in a more unified way, using similar objectives and a shared architecture for texts and images.

# REFERENCES

Yuki M. Asano, Christian Rupprecht, and Andrea Vedaldi. Self-labelling via simultaneous clustering and representation learning. In International Conference on Learning Representations (ICLR), 2020.

Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.

Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Jianfeng Gao, Songhao Piao, Ming Zhou, and Hsiao-Wuen Hon. UniLMv2: Pseudo-masked language models for unified language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, volume 119 of Proceedings of Machine Learning Research, pp. 642-652. PMLR, 2020. URL http://proceedings.mlr.press/v119/bao20a.html.

Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 132-149, 2018.

Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. In Advances in Neural Information Processing Systems, volume 33, pp. 9912-9924. Curran Associates, Inc., 2020.

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. arXiv preprint arXiv:2104.14294, 2021.

Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels.
In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 1691-1703. PMLR, 13-18 Jul 2020a. URL http://proceedings.mlr.press/v119/chen20s.html. +Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. preprint arXiv:2002.05709, 2020b. +Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. preprint arXiv:2011.10566, 2020. + +Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. preprint arXiv:2003.04297, 2020c. +Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. ArXiv, abs/2104.02057, 2021. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171–4186. Association for Computational Linguistics, 2019. +Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. preprint arXiv:2010.11929, 2020. +Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. Bootstrap your own latent: A new approach to self-supervised learning. In NeurIPS, 2020. +Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, 2020. 
+R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bklr3j0cKX. +Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q. Weinberger. Deep networks with stochastic depth. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling (eds.), Computer Vision - ECCV 2016, pp. 646-661, Cham, 2016. Springer International Publishing. ISBN 978-3-319-46493-0. +Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=rkE3y85ee. +Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77, 2020. doi: 10.1162/tacl_a_00300. URL https://www.aclweb.org/anthology/2020.tacl-1.5. +Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, 2014. +Nikos Komodakis and Spyros Gidaris. Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations (ICLR), 2018. +A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009. +Junnan Li, Pan Zhou, Caiming Xiong, and Steven Hoi. Prototypical contrastive learning of unsupervised representations. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=KmykpuSrjcq. 
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pp. 740-755. Springer, 2014.
Yahui Liu, Enver Sangineto, Wei Bi, Nicu Sebe, Bruno Lepri, and Marco De Nadai. Efficient training of visual transformers with small datasets. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021a. URL https://openreview.net/forum?id=SCN8UaetXx.
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. arXiv preprint arXiv:2103.14030, 2021b.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.
Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. In International Conference on Learning Representations, 2017.
Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision, pp. 69-84. Springer, 2016.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. preprint arXiv:1807.03748, 2018.
Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman. Intermediate-task transfer learning with pretrained language models: When and why does it work? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, July 2020.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu.
Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1-140:67, 2020. URL http://jmlr.org/papers/v21/20-074.html. +A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. ArXiv, abs/2102.12092, 2021. +Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with VQ-VAE-2. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. +Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. IJCV, 2015. +Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715-1725, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1162. URL https://www.aclweb.org/anthology/P16-1162. +Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. preprint arXiv:2012.12877, 2020. +Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, and Hervé Jégou. Going deeper with image transformers. arXiv preprint arXiv:2103.17239, 2021. +Trieu H Trinh, Minh-Thang Luong, and Quoc V Le. Selfie: Self-supervised pretraining for image embedding. arXiv preprint arXiv:1906.02940, 2019. +Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pp. 6309-6318, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964. 
+ +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 5998-6008, 2017. +Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In CVPR, 2018. +Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In ECCV, 2018. +Zhenda Xie, Yutong Lin, Zhuliang Yao, Zheng Zhang, Qi Dai, Yue Cao, and Han Hu. Self-supervised learning with swin transformers. arXiv preprint arXiv:2105.04553, 2021. +Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers. arXiv preprint arXiv:2106.04560, 2021. +Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In ECCV, 2016. +Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip H. S. Torr, and Li Zhang. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. CoRR, abs/2012.15840, 2020. URL https://arxiv.org/abs/2012.15840. +Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ADE20K dataset. Int. J. Comput. Vis., 127(3): 302-321, 2019. doi: 10.1007/s11263-018-1140-0. URL https://doi.org/10.1007/s11263-018-1140-0. + +# A ARCHITECTURE VARIANTS OF VISION TRANSFORMER + +We use the standard vision Transformer (ViT; Dosovitskiy et al. 2020) in the experiments for fair comparisons. 
In addition, we find that LayerScale (Touvron et al., 2021) and relative position bias (Bao et al., 2020; Raffel et al., 2020) improve ViTs on downstream tasks. We employ the same setting as in Section 3.3 for the ablation studies, which pretrains base-size models for 300 epochs on ImageNet-1K.

As shown in Table 5, both LayerScale and relative position bias improve performance on ImageNet classification and ADE20K semantic segmentation. We denote the improved architecture as $\mathrm{BEiT}^+$ and use it for the experiments in Appendix B. We empirically observe that the vanilla Transformer is the most stable when scaling the model up to billions of parameters, so we do not use LayerScale for extra-large models.
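As a concrete reference, LayerScale multiplies each residual branch output by a small learnable per-channel vector, so every block starts close to an identity mapping. A minimal NumPy sketch (illustrative only, not the training implementation):

```python
import numpy as np

class LayerScale:
    """Per-channel scaling of a residual branch (Touvron et al., 2021).

    gamma is a learnable vector, initialised to a small constant, so that
    each Transformer block initially contributes little to the residual
    stream, which helps when training deeper ViTs.
    """

    def __init__(self, dim, init_value=1e-4):
        self.gamma = np.full(dim, init_value)

    def __call__(self, branch_output):
        return self.gamma * branch_output


def residual_block(x, sublayer, layerscale):
    # Pre-norm residual update x + gamma * F(x); LayerNorm omitted for brevity.
    return x + layerscale(sublayer(x))
```

With `init_value = 0` the block is exactly the identity; training then gradually opens up each channel.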
| Architecture | ImageNet | ADE20K |
| --- | --- | --- |
| ViT (used in this paper) | 82.86 | 44.86 |
| ViT + LayerScale | 83.00 | 45.43 |
| ViT + LayerScale + Relative Position Bias | 83.22 | 45.70 |

Table 5: Ablation studies of architecture variants on image classification and semantic segmentation. For ADE20K, we use UperNet (Xiao et al., 2018) as the task layer and report mIoU scores of single-scale inference.

# B COMPARISON WITH LARGE-SCALE SUPERVISED PRE-TRAINING

We compare with state-of-the-art supervised pre-training at scale. In addition to using ImageNet-1K for fair comparisons with previous work, we pretrain BEiT on ImageNet-22K to boost performance. We employ the architecture improvements (i.e., LayerScale and relative position bias) described in Appendix A, denoted as $\mathrm{BEiT}^+$ in Table 6 and Table 7. We follow the same pre-training setup as in Section 2.5, except that we pretrain for 150 epochs on ImageNet-22K. After self-supervised pre-training, we conduct intermediate fine-tuning on ImageNet-22K for 90 epochs. Moreover, we use an in-house dataset of about 70M labeled images as a drop-in replacement for ImageNet-22K.
| Models | Model Size | Labeled Data Size | ImageNet ($384^2$) | ImageNet ($512^2$) |
| --- | --- | --- | --- | --- |
| *Supervised Pre-Training on ImageNet-22K (using labeled data)* | | | | |
| ViT-B (Dosovitskiy et al., 2020) | 86M | 14M | 84.0 | - |
| ViT-L (Dosovitskiy et al., 2020) | 307M | 14M | 85.2 | 85.30 |
| ViT-H (Dosovitskiy et al., 2020) | 632M | 14M | 85.1 | - |
| *Supervised Pre-Training on Google JFT-300M (using labeled data)* | | | | |
| ViT-B (Dosovitskiy et al., 2020) | 86M | 300M | 84.2 | - |
| ViT-L (Dosovitskiy et al., 2020) | 307M | 300M | 87.1 | 87.76 |
| ViT-H (Dosovitskiy et al., 2020) | 632M | 300M | 88.0 | 88.55 |
| *Supervised Pre-Training on Google JFT-3B (using labeled data)* | | | | |
| ViT-B (Zhai et al., 2021) | 86M | 3000M | 86.6 | - |
| ViT-L (Zhai et al., 2021) | 307M | 3000M | 88.5 | - |
| *Self-Supervised Pre-Training, and Intermediate Fine-Tuning on ImageNet-22K* | | | | |
| BEiT-B$^+$ (ours) | 86M | 14M | 86.8 | - |
| BEiT-L$^+$ (ours) | 307M | 14M | 88.4 | 88.6 |
| *Self-Supervised Pre-Training, and Intermediate Fine-Tuning on In-House-70M* | | | | |
| BEiT-L$^+$ (ours) | 307M | 70M | 89.3 | 89.5 |

Table 6: Top-1 accuracy on ImageNet-1K fine-tuning. We evaluate models at resolutions $384^{2}$ and $512^{2}$.

Table 6 compares BEiT with previous state-of-the-art supervised pre-training (Dosovitskiy et al., 2020; Zhai et al., 2021) on ImageNet fine-tuning. Rather than relying heavily on extremely large labeled datasets (such as Google's in-house JFT-300M and JFT-3B), we demonstrate that BEiT pre-training can catch up using only ImageNet-22K (14M images). Specifically, BEiT-L fine-tuned on ImageNet-22K achieves performance comparable to ViT-L trained on Google JFT-3B. Moreover, BEiT-L obtains $89.5\%$ top-1 accuracy on ImageNet after intermediate fine-tuning on an in-house 70M dataset. The results indicate that BEiT pre-training greatly reduces the required labeling effort and advances the state of the art for large-size vision Transformers.

As shown in Table 7, we report the fine-tuning results on the ADE20K semantic segmentation benchmark. Following Swin (Liu et al., 2021b), we use the same task layer (i.e., UperNet; Xiao et al. 2018) and evaluate the models at resolution $640 \times 640$. The BEiT-L model obtains state-of-the-art performance on ADE20K.
| Models | mIoU (%) | Multi-Scale mIoU (%) |
| --- | --- | --- |
| *Supervised Pre-Training on ImageNet-22K (using labeled data)* | | |
| Swin-B (Liu et al., 2021b) | 50.0 | 51.7 |
| Swin-L (Liu et al., 2021b) | 52.1 | 53.5 |
| *Self-Supervised Pre-Training, and Intermediate Fine-Tuning on ImageNet-22K* | | |
| BEiT-B$^+$ (ours) | 53.6 | 54.2 |
| BEiT-L$^+$ (ours) | 56.7 | 57.0 |
| *Self-Supervised Pre-Training, and Intermediate Fine-Tuning on In-House-70M* | | |
| BEiT-L$^+$ (ours) | 57.9 | 58.4 |

Table 7: Performance comparison on ADE20K semantic segmentation. We follow Swin-L (Liu et al., 2021b) in using UperNet (Xiao et al., 2018) as the task layer, and evaluate at resolution $640 \times 640$.

# C ABLATION STUDIES OF IMAGE TOKENIZER

For comparison, we re-train the image tokenizer on ImageNet-1K. The reimplementation is based on https://github.com/lucidrains/DALLE-pytorch. We use the same codebook size of 8K as in DALL-E (Ramesh et al., 2021). We then plug the tokenizer into our pre-training process, following the same experimental setup for ablation studies as in Section 3.3. Table 8 shows that our reimplemented tokenizer obtains reconstruction loss and ImageNet fine-tuning performance comparable to the off-the-shelf DALL-E tokenizer.
| Image Tokenizer | Reconstruction Error | ImageNet |
| --- | --- | --- |
| DALL-E Tokenizer (Ramesh et al., 2021) | 0.0856 | 82.86 |
| Our reimplementation | 0.0880 | 82.70 |

Table 8: Top-1 accuracy on ImageNet-1K using different image tokenizers during pre-training. For image reconstruction, we report the mean absolute error of normalized RGB values. The reimplemented image tokenizer is trained on ImageNet-1K without labels.

# D LINEAR PROBES ON IMAGENET

We evaluate linear probes on ImageNet for various pretrained vision Transformers. We compare BEiT with two main strands of self-supervised learning, namely discriminative and generative. The first strand applies discriminative learning for pre-training, such as contrastive learning (Chen et al., 2021) and self-distillation (Caron et al., 2021). These methods typically learn to aggregate image-level features into a global vector, which is relatively well suited for linear probing. In contrast, the second strand of methods, such as iGPT (Chen et al., 2020a) and ours, usually does not pretrain such global feature aggregation, which tends to make linear probing harder.

Following iGPT (Chen et al., 2020a), we use average pooling to aggregate the hidden states of the image patches, and add the probing layer at an intermediate layer of the Transformer instead of always at the final layer. We find that the best layer is the 9th for BEiT-B and the 14th for BEiT-L. Specifically, we use AdamW (Loshchilov & Hutter, 2019) to update the linear probe layer for 50 epochs. The learning rate is 4e-3 with cosine decay, the batch size is 1024, and the weight decay is set to 1e-4. We follow the data augmentation used in DINO (Caron et al., 2021), which applies random resized crops and horizontal flips during training and evaluates on central crops.
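The probing recipe described above (average-pool the patch states of one intermediate layer, then train only a linear head on the frozen backbone) can be sketched as follows; the array shapes are illustrative, not taken from the released code:

```python
import numpy as np

def probe_feature(layer_states, layer_index):
    """layer_states: list of per-layer arrays of shape (num_patches, dim).

    Average-pool the patch tokens of one intermediate layer (e.g. the 9th
    for BEiT-B, the 14th for BEiT-L) into a single feature vector.
    """
    return layer_states[layer_index].mean(axis=0)

def probe_logits(feature, weight, bias):
    # Only weight and bias are updated during linear probing;
    # the pretrained Transformer itself stays frozen.
    return weight @ feature + bias
```

In practice the pooled features would be fed to AdamW with the settings given above; the sketch only shows where the trainable parameters live.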
| Models | Model Size | Accuracy |
| --- | --- | --- |
| *Discriminative self-supervised learning* | | |
| DINO-B (Caron et al., 2021) | 86M | 78.2 |
| MoCo v3-B (Chen et al., 2021) | 86M | 76.7 |
| MoCo v3-L (Chen et al., 2021) | 307M | 77.6 |
| *Generative self-supervised learning* | | |
| iGPT-L (Chen et al., 2020a) | 1362M | 65.2 |
| iGPT-XL (Chen et al., 2020a) | 6801M | 68.7 |
| iGPT-XL (Chen et al., 2020a) | 6801M | 72.0* |
| BEiT-B (ours) | 86M | 56.7 |
| BEiT-L (ours) | 307M | 73.5 |

Table 9: Linear probing accuracy on ImageNet. "*" denotes that iGPT-XL uses the concatenation of five layers for linear probing, while the others use the features of a single layer.

As shown in Table 9, we evaluate linear probes on ImageNet-1K for self-supervised learning. Overall, discriminative methods perform better than generative pre-training on linear probing. Linear probes keep the Transformer parameters fixed and only update the linear layer, so the pre-training of global image-level feature aggregation benefits linear probing in DINO and MoCo v3, although full fine-tuning eliminates the gap. Moreover, the results indicate that increasing the model size from base (86M) to large (307M) significantly improves accuracy for our proposed method; in contrast, the gap between base- and large-size MoCo v3 is smaller. We also find that BEiT outperforms iGPT by a large margin while using far fewer parameters.

# E MULTI-TASK PRE-TRAINING WITH DINO

We train the pre-training tasks of BEiT and DINO (Caron et al., 2021) together in a multi-task manner. As shown in Table 10, augmenting masked image modeling with DINO improves semantic segmentation on ADE20K and obtains comparable results on ImageNet classification. Moreover, BEiT is more efficient in terms of pre-training speed, as DINO keeps two copies of the Transformer parameters for self-distillation and uses multi-crop augmentation (Caron et al., 2020). For the throughput comparison between BEiT and BEiT+DINO, we use the same batch size for both. Because BEiT is also more memory-efficient, we can use a larger batch size to fully utilize the GPUs, which yields a greater speedup in practice than the reported numbers.
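Conceptually, the multi-task objective optimizes the masked-image-modeling loss and the DINO self-distillation loss on the same batch. The unit-weight sum below is an illustrative assumption; the text does not specify the exact weighting:

```python
def multitask_loss(mim_loss, dino_loss, dino_weight=1.0):
    """Joint pre-training objective for BEiT + DINO (sketch).

    mim_loss:    masked image modeling loss from the BEiT head.
    dino_loss:   self-distillation loss between student and teacher views.
    dino_weight: hypothetical balancing coefficient (assumed, not stated
                 in the text).
    """
    return mim_loss + dino_weight * dino_loss
```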
| Models | ImageNet | ADE20K | Pre-Training Throughput |
| --- | --- | --- | --- |
| DINO (400 epochs) | 82.8 | 44.08 | - |
| BEiT (300 epochs) | 82.9 | 44.65 | 4.2x |
| BEiT + DINO (300 epochs) | 82.9 | 46.85 | 1.0x |

Table 10: We train the pre-training tasks of BEiT and DINO (Caron et al., 2021) in a multi-task manner. We report the performance of fine-tuning on ImageNet-1K image classification and ADE20K semantic segmentation. For ADE20K, we use SETR-PUP (Zheng et al., 2020) as the task layer and report the mIoU score of single-scale inference. The pre-training throughput measures speed, where larger numbers indicate faster pre-training.

# F IMAGE CLASSIFICATION ON CIFAR-100

In addition to ImageNet classification, we conduct fine-tuning experiments on the CIFAR-100 (Krizhevsky & Hinton, 2009) benchmark, which has 100 classes and 60K images. The experimental setup is the same as in Section 3.1.

Table 11 reports the top-1 accuracy on CIFAR-100. Notably, on the smaller CIFAR-100 dataset, ViT trained from scratch reaches only $48.5\%$ accuracy (Chen et al., 2021). In comparison, BEiT achieves $90.1\%$ with the help of pre-training. The results indicate that BEiT can greatly reduce the annotation requirements. BEiT also outperforms MoCo v3. Moreover, intermediate fine-tuning on ImageNet-1K further improves the results on CIFAR-100.
| Models | CIFAR-100 |
| --- | --- |
| *Training from scratch (i.e., random initialization)* | |
| ViT$_{384}$ (Dosovitskiy et al., 2020) | 48.5* |
| *Supervised Pre-Training on ImageNet-1K (using labeled data)* | |
| ViT$_{384}$ (Dosovitskiy et al., 2020) | 87.1 |
| DeiT (Touvron et al., 2020) | 90.8 |
| *Self-Supervised Pre-Training on ImageNet-1K (without labeled data)* | |
| DINO (Caron et al., 2021) | 91.7 |
| MoCo v3 (Chen et al., 2021) | 87.1 |
| BEiT (ours) | 90.1 |
| *Self-Supervised Pre-Training, and Intermediate Fine-Tuning on ImageNet-1K* | |
| BEiT (ours) | 91.8 |

Table 11: Top-1 accuracy of image classification on CIFAR-100. The models use resolution $224 \times 224$, except $\mathrm{ViT}_{384}$, which uses $384 \times 384$. Unless otherwise indicated, all results are obtained with base-size models. *: result taken from Chen et al. (2021).

# G HYPERPARAMETERS FOR PRE-TRAINING
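The pre-training optimizer settings in Table 12 (peak learning rate 1.5e-3, minimum 1e-5, 10 warmup epochs, cosine decay over 800 epochs) correspond to the standard warmup-plus-cosine schedule; a generic sketch, not the exact code used:

```python
import math

def learning_rate(epoch, total_epochs=800, warmup_epochs=10,
                  peak_lr=1.5e-3, min_lr=1e-5):
    """Linear warmup to peak_lr, then cosine decay down to min_lr.

    Default values follow the pre-training table; the helper itself is
    a generic sketch of the schedule.
    """
    if epoch < warmup_epochs:
        # Linear ramp: reaches peak_lr at the end of warmup.
        return peak_lr * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```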
| Hyperparameters | Base Size | Large Size |
| --- | --- | --- |
| Layers | 12 | 24 |
| Hidden size | 768 | 1024 |
| FFN inner hidden size | 3072 | 4096 |
| Attention heads | 12 | 16 |
| Attention head size | 64 | 64 |
| Patch size | 16 × 16 | 16 × 16 |
| Training epochs | 800 | 800 |
| Batch size | 2048 | 2048 |
| Adam $\epsilon$ | 1e-8 | 1e-8 |
| Adam $\beta$ | (0.9, 0.999) | (0.9, 0.999) |
| Peak learning rate | 1.5e-3 | 1.5e-3 |
| Minimal learning rate | 1e-5 | 1e-5 |
| Learning rate schedule | Cosine | Cosine |
| Warmup epochs | 10 | 10 |
| Gradient clipping | 3.0 | 1.0 |
| Dropout | ✗ | ✗ |
| Stoch. depth | 0.1 | 0.1 |
| Weight decay | 0.05 | 0.05 |
| Data augment | RandomResizeAndCrop | RandomResizeAndCrop |
| Input resolution | 224 × 224 | 224 × 224 |
| Color jitter | 0.4 | 0.4 |

Table 12: Hyperparameters for pre-training BEiT on ImageNet-1K.

# H HYPERPARAMETERS FOR IMAGE CLASSIFICATION FINE-TUNING
| Hyperparameters | CIFAR-100 Base Size | ImageNet-1K Base Size | ImageNet-1K Large Size |
| --- | --- | --- | --- |
| Peak learning rate | {2e-3, 3e-3, 4e-3, 5e-3} | {2e-3, 3e-3, 4e-3, 5e-3} | {2e-3, 3e-3, 4e-3, 5e-3} |
| Fine-tuning epochs | 150 | 100 | 50 |
| Batch size | 512 | 1024 | 1024 |
| Warmup epochs | 20 | 20 | 5 |
| Layer-wise learning rate decay | 0.65 | 0.65 | 0.75 |
| Adam $\epsilon$ | 1e-8 | 1e-8 | 1e-8 |
| Adam $\beta$ | (0.9, 0.999) | (0.9, 0.999) | (0.9, 0.999) |
| Minimal learning rate | 1e-6 | 1e-6 | 1e-6 |
| Learning rate schedule | Cosine | Cosine | Cosine |
| Repeated aug | ✗ | ✗ | ✗ |
| Weight decay | 0.3 | 0.05 | 0.05 |
| Label smoothing $\epsilon$ | 0.1 | 0.1 | 0.1 |
| Stoch. depth | 0.1 | 0.1 | 0.1 |
| Dropout | ✗ | ✗ | ✗ |
| Gradient clipping | ✗ | ✗ | ✗ |
| Erasing prob. | ✗ | 0.25 | 0.25 |
| Input resolution | 224 × 224 | 224 × 224 | 224 × 224 |
| Rand augment | 9/0.5 | 9/0.5 | 9/0.5 |
| Mixup prob. | 0.8 | 0.8 | 0.8 |
| Cutmix prob. | 1.0 | 1.0 | 1.0 |

Table 13: Hyperparameters for fine-tuning BEiT on ImageNet-1K and CIFAR-100.

# I HYPERPARAMETERS FOR ADE20K SEMANTIC SEGMENTATION FINE-TUNING
| Hyperparameters | Base Size |
| --- | --- |
| Peak learning rate | 1e-3 |
| Fine-tuning steps | 160K |
| Batch size | 16 |
| Adam $\epsilon$ | 1e-8 |
| Adam $\beta$ | (0.9, 0.999) |
| Layer-wise learning rate decay | 0.65 |
| Minimal learning rate | 0 |
| Learning rate schedule | Linear |
| Warmup steps | 1500 |
| Dropout | ✗ |
| Stoch. depth | 0.1 |
| Weight decay | 0.05 |
| Input resolution | 512 × 512 |
| Position embedding interpolation | bilinear |

Table 14: Hyperparameters for fine-tuning BEiT on ADE20K.
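Tables 13 and 14 both use layer-wise learning-rate decay, the standard construction that scales the rate geometrically from the top Transformer block downwards. A minimal sketch (the exact per-parameter grouping is an implementation detail not specified here):

```python
def layerwise_learning_rates(base_lr, num_layers, decay):
    """Return one learning rate per layer, indexed from the input
    embedding (index 0) up to the top Transformer block (index num_layers).

    The top layer uses base_lr; each layer below is multiplied by `decay`
    (e.g. 0.65 for base-size models), so lower layers move less during
    fine-tuning and the pretrained features are better preserved.
    """
    return [base_lr * decay ** (num_layers - i) for i in range(num_layers + 1)]
```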
# $\beta$-INTACT-VAE: IDENTIFYING AND ESTIMATING CAUSAL EFFECTS UNDER LIMITED OVERLAP

Pengzhou (Abel) Wu & Kenji Fukumizu

Department of Statistical Science, The Graduate University for Advanced Studies & The Institute of Statistical Mathematics

Tachikawa, Tokyo

{wu.pengzhou, fukumizu}@ism.ac.jp

# ABSTRACT

As an important problem in causal inference, we discuss the identification and estimation of treatment effects (TEs) under limited overlap; that is, when subjects with certain features belong to a single treatment group.
We use a latent variable to model a prognostic score which is widely used in biostatistics and sufficient for TEs; i.e., we build a generative prognostic model. We prove that the latent variable recovers a prognostic score, and the model identifies individualized treatment effects. The model is then learned as $\beta$ -Intact-VAE—a new type of variational autoencoder (VAE). We derive the TE error bounds that enable representations balanced for treatment groups conditioned on individualized features. The proposed method is compared with recent methods using (semi-)synthetic datasets. + +# 1 INTRODUCTION + +Causal inference (Imbens & Rubin, 2015; Pearl, 2009), i.e., inferring causal effects of interventions, is a fundamental field of research. In this work, we focus on treatment effects (TEs) based on a set of observations comprising binary labels $T$ for treatment/control (non-treated), outcome $Y$ , and other covariates $X$ . Typical examples include estimating the effects of public policies or new drugs based on the personal records of the subjects. The fundamental difficulty of causal inference is that we never observe counterfactual outcomes that would have been if we had made the other decision (treatment or control). While randomized controlled trials (RCTs) control biases through randomization and are ideal protocols for causal inference, they often have ethical and practical issues, or suffer from expensive costs. Thus, causal inference from observational data is important. + +Causal inference from observational data has other challenges as well. One is confounding: there may be variables, called confounders, that causally affect both the treatment and the outcome, and spurious correlation/bias follows. The other is the systematic imbalance (difference) of the distributions of the covariates between the treatment and control groups—that is, $X$ depends on $T$ , which introduces bias in estimation. 
A majority of studies on causal inference, including the current work, have relied on unconfoundedness; this means that the confounding can be controlled by conditioning on the covariates. The more covariates are collected the more likely unconfoundedness holds; however, more covariates tend to introduce a stronger imbalance between treatment and control. + +The current work studies the issue of imbalance in estimating individualized TEs conditioned on $X$ . Classical approaches aim for covariate balance, $X$ independent of $T$ , by matching and re-weighting (Stuart, 2010; Rosenbaum, 2020). Machine learning methods have also been exploited; there are semi-parametric methods—e.g., Van der Laan & Rose (2018, TMLE)—which improve finite sample performance, as well as non-parametric methods—e.g., Wager & Athey (2018, CF). Notably, from Johansson et al. (2016), there has been a recent increase in interest in balanced representation learning (BRL) to learn representations $Z$ of the covariates, such that $Z$ is independent of $T$ . + +The most serious form of imbalance is the limited (or weak) overlap of covariates, which means that sample points with certain covariate values belong to a single treatment group. In this case, a straightforward estimation of TEs is not possible at non-overlapping covariate values due to lack of data. There are works that provide robustness to limited overlap (Armstrong & Kolesár, 2021), trim non-overlapping data points (Yang & Ding, 2018), weight data points by overlap (Li & Li, 2019), or study convergence rates depending on overlap (Hong et al., 2020). Limited overlap is particularly relevant to machine learning methods that exploit high-dimensional covariates. This is because, with higher-dimensional covariates, overlap is harder to satisfy and verify (D'Amour et al., 2020). 
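The covariate imbalance discussed here is commonly quantified with the standardized mean difference (SMD) between treatment groups; the snippet below is a generic diagnostic, not part of the paper's method:

```python
import numpy as np

def standardized_mean_difference(x, t):
    """|mean(x | t=1) - mean(x | t=0)| scaled by the pooled standard
    deviation. Near 0 means the covariate is balanced across groups;
    values above ~0.1 are conventionally flagged as imbalance.
    """
    x1, x0 = x[t == 1], x[t == 0]
    pooled_sd = np.sqrt((x1.var(ddof=1) + x0.var(ddof=1)) / 2.0)
    return abs(x1.mean() - x0.mean()) / pooled_sd

# Synthetic illustration: treatment depending on X is imbalanced,
# randomized treatment (as in an RCT) is not.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
t_confounded = (x + rng.normal(size=5000) > 0).astype(int)
t_randomized = rng.integers(0, 2, size=5000)
smd_confounded = standardized_mean_difference(x, t_confounded)
smd_randomized = standardized_mean_difference(x, t_randomized)
```

On the confounded assignment the SMD is large; under randomization it is close to zero.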
To address imbalance and limited overlap, we use a prognostic score (Hansen, 2008); it is a sufficient statistic of outcome predictors and is among the key concepts of sufficient scores for TE estimation. As a function of covariates, it can map some non-overlapping values to an overlapping value in a lower-dimensional space. For individualized TEs, we consider a conditionally balanced representation $Z$, such that $Z$ is independent of $T$ given $X$—which, as we will see, is a necessary condition for a balanced prognostic score. Moreover, prognostic score modeling can benefit from methods in predictive analytics and exploit a rich literature, particularly in medicine and health (Hajage et al., 2017). Thus, it is promising to combine the predictive power of prognostic modeling and machine learning. With this idea, our method builds on a generative prognostic model that models the prognostic score as a latent variable and factorizes into the score distribution and the outcome distribution.

As we consider latent variables and causal inference, identification is an issue that must be discussed before estimation is considered. "Identification" means that the parameters of interest (in our case, the representation function and TEs) are uniquely determined and expressed using the true observational distribution. Without identification, a consistent estimator is impossible to obtain, and a model would fail silently; in other words, the model may fit perfectly but will return an estimator that converges to a wrong value, or does not converge at all (Lewbel, 2019, particularly Sec. 8). Identification is even more important for causal inference because, unlike usual (non-causal) model misspecification, causal assumptions are often unverifiable through observables (White & Chalak, 2013). Thus, it is critical to specify the theoretical conditions for identification, so that the applicability of the methods can be judged by knowledge of an application domain.
A major strength of our generative model is that the latent variable is identifiable. This is because the factorization of our model is naturally realized as a combination of identifiable VAE (Khemakhem et al., 2020a, iVAE) and conditional VAE (Sohn et al., 2015, CVAE). Based on model identifiability, we develop two identification results for individualized TEs under limited overlap. A similar VAE architecture was proposed in Wu & Fukumizu (2020b); the current study differs in setting, theory, learning objective, and experiments. The previous work studies unobserved confounding but not limited overlap, with a different set of assumptions and identification theories. The current study further provides bounds on individualized TE error, and the bounds justify a conditionally balancing term controlled by the hyperparameter $\beta$, as an interpolation between the two identifications.

In summary, we study the identification (Sec. 3) and estimation (Sec. 4) of individualized TEs under limited overlap. Our approach is based on recovering prognostic scores from observed variables. To this end, our method exploits recent advances in identifiable representation—particularly iVAE. The code is in the Supplementary Material, and the proofs are in Sec. A. Our main contributions are:

1) TE identification under limited overlap of $X$, via prognostic scores and an identifiable model;
2) bounds on individualized TE error, which justify our conditional BRL;
3) a new regularized VAE, $\beta$-Intact-VAE, realizing the identification and conditional balance;
4) experimental comparison to state-of-the-art methods on (semi-)synthetic datasets.

# 1.1 RELATED WORK

Limited overlap. Under limited overlap, Luo et al. (2017) estimate the average TE (ATE) by reducing covariates to a linear prognostic score. Farrell (2015) estimates a constant TE under a partially linear outcome model.
D'Amour & Franks (2021) study the identification of ATE by a general class of scores, given the (linear) propensity score and prognostic score. Machine learning studies on this topic have focused on finding overlapping regions (Oberst et al., 2020; Dai & Stultz, 2020), or on indicating possible failure under limited overlap (Jesson et al., 2020), but not on remedies. An exception is Johansson et al. (2020), which provides bounds under limited overlap. To the best of our knowledge, our method is the first machine learning method that provides identification under limited overlap.

Prognostic scores have recently been combined with machine learning approaches, mainly in the biostatistics community. For example, Huang & Chan (2017) estimate individualized TE by reducing covariates to a linear score that is a joint propensity-prognostic score. Tarr & Imai (2021) use SVM to minimize the worst-case bias due to prognostic score imbalance. However, in the machine learning community, few methods consider prognostic scores; Zhang et al. (2020a) and Hassanpour & Greiner (2019) learn outcome predictors without mentioning the prognostic score, while Johansson et al. (2020) conceptually, but not formally, connects BRL to prognostic scores. Our work is the first to formally connect generative learning and prognostic scores for TE estimation.

Identifiable representation. Recently, independent component analysis (ICA) and representation learning—both ill-posed inverse problems—have converged to yield nonlinear ICA and identifiable representation; for example, using VAEs (Khemakhem et al., 2020a) and energy models (Khemakhem et al., 2020b). The results are exploited in causal discovery (Wu & Fukumizu, 2020a) and out-of-distribution (OOD) generalization (Sun et al., 2020). This study is the first to explore identifiable representations for TE identification.

BRL and related methods constitute a major research direction.
Early BRL methods include BLR/BNN (Johansson et al., 2016) and TARnet/CFR (Shalit et al., 2017). In addition, Yao et al. (2018) exploit the local similarity between data points. Shi et al. (2019) use an architecture similar to TARnet, considering the importance of treatment probability. There are also methods that use GANs (Yoon et al., 2018, GANITE) and Gaussian processes (Alaa & van der Schaar, 2017). Our method shares the idea of BRL, and further extends it to conditional balance—which is natural for individualized TE.

More. Our work lays conceptual and theoretical foundations for VAE methods for TEs (e.g., CEVAE; Louizos et al., 2017; Lu et al., 2020). See Sec. D for more related works, where we also make detailed comparisons to CFR and CEVAE, which are well-known machine learning methods.

# 2 SETUP AND PRELIMINARIES

# 2.1 COUNTERFACTUALS, TREATMENT EFFECTS, AND IDENTIFICATION

Following Imbens & Rubin (2015), we assume there exist potential outcomes $Y(t) \in \mathbb{R}^d, t \in \{0,1\}$. $Y(t)$ is the outcome that would have been observed if the treatment value $T = t$ had been applied. We see $Y(t)$ as the hidden variables that give the factual outcome $Y$ under the factual assignment $T = t$. Formally, $Y(t)$ is defined by the consistency of counterfactuals: $Y = Y(t)$ if $T = t$; or simply $Y = Y(T)$. The fundamental problem of causal inference is that, for a unit under research, we can observe only one of $Y(0)$ or $Y(1)$—according to the treatment value applied. That is, "factual" refers to $Y$ or $T$, which is observable, or to estimators built on the observables. We also observe relevant covariate(s) $X \in \mathcal{X} \subseteq \mathbb{R}^m$, which is associated with individuals, with distribution $\mathcal{D} := (X,Y,T) \sim p(\boldsymbol{x},\boldsymbol{y},t)$. We use upper-case (e.g. $T$) to denote random variables, and lower-case (e.g. $t$) for realizations.

The expected potential outcome conditioned on $X = \pmb{x}$ is denoted by $\mu_t(\pmb{x}) = \mathbb{E}(Y(t)|X = \pmb{x})$ . The estimands in this work are the conditional ATE (CATE) and ATE, defined, respectively, by:

$$
\tau (\boldsymbol {x}) = \mu_ {1} (\boldsymbol {x}) - \mu_ {0} (\boldsymbol {x}), \quad \nu = \mathbb {E} (\tau (X)). \tag {1}
$$

CATE is seen as an individual-level, personalized, treatment effect, given highly discriminative $X$ .

Standard results (Rubin, 2005; Hernan & Robins, 2020, Ch. 3) show sufficient conditions for TE identification in general settings. They are Exchangeability: $Y(t) \bot T|X$ , and Overlap: $p(t|\pmb{x}) > 0$ for any $\pmb{x} \in \mathcal{X}$ . Both are required for $t \in \{0,1\}$ . When $t$ appears in statements without quantification, we always mean "for both $t$ ". Often, Consistency is also listed; however, as mentioned, it is better known as the well-definedness of counterfactuals. Exchangeability means, just as in RCTs, but additionally given $X$ , that there is no correlation between the factual $T$ and the potential $Y(t)$ . Note that the popular assumption $Y(0), Y(1) \bot T|X$ is stronger than $Y(t) \bot T|X$ and is not necessary for identification (Hernan & Robins, 2020, pp. 15). Overlap means that the supports of $p(\pmb{x}|t = 0)$ and $p(\pmb{x}|t = 1)$ should be the same, which ensures that there are data for $\mu_t(\pmb{x})$ at any $(\pmb{x},t)$ .

We rely on consistency and exchangeability, but in Sec. 3.2 will relax the overlap condition on the covariate to allow some non-overlapping values $x$ — that is, covariate $X$ is limited-overlapping. In this paper, we also discuss overlapping variables other than $X$ (e.g., prognostic scores), and provide a definition for any random variable $V$ with support $\mathcal{V}$ as follows:

Definition 1. $V$ is Overlapping if $p(t|V = \boldsymbol{v}) > 0$ for any $t \in \{0,1\}, \boldsymbol{v} \in \mathcal{V}$ .
If the condition is violated at some value $\boldsymbol{v}$ , then $\boldsymbol{v}$ is non-overlapping and $V$ is limited-overlapping.

# 2.2 PROGNOSTIC SCORES

Our method aims to recover a prognostic score (Hansen, 2008), adapted to account for both values of $t$ as in Definition 2. On the other hand, balancing scores (Rosenbaum & Rubin, 1983) $b(X)$ are defined by $T \bot X | b(X)$ , of which the propensity score $p(t = 1|X)$ is a special case. See Sec. B.1 for details.

Definition 2. A $PGS$ is $\{\mathfrak{p}(X,t)\}_{t\in \{0,1\}}$ such that $Y(t)\perp X|\mathfrak{p}(X,t)$ , where $\mathfrak{p}(\boldsymbol {x},t)$ ( $\mathfrak{p}_t(\boldsymbol {x})$ hereafter) is a function defined on $\mathcal{X}\times \{0,1\}$ . A $PGS$ is called balanced (and a bPGS) if $\mathfrak{p}_0 = \mathfrak{p}_1$ .

We say a PGS is overlapping if both $\mathfrak{p}_0(X)$ and $\mathfrak{p}_1(X)$ are overlapping. Obviously, a bPGS $\mathfrak{p}(X)$ is a conditionally balanced representation (defined as $Z\perp T|X$ in the Introduction) and is thus named. We often write the function argument $t$ as a subscript.

We use a bPGS or PGS to construct representations for CATE estimation. Why not balancing scores? While balancing scores $\pmb{b}(X)$ have been widely used in causal inference, PGSs are more suitable for discussing overlap. Our purpose is to recover an overlapping score for limited-overlapping $X$ . It is known that an overlapping $\pmb{b}(X)$ implies overlapping $X$ (D'Amour et al., 2020), which runs counter to our purpose. In contrast, an overlapping bPGS does not imply an overlapping $\pmb{b}(X)$ . Example. Let $T = \mathbb{I}(X + \epsilon > 0)$ and $Y = \pmb{f}(|X|, T) + \pmb{\mathrm{e}}$ , where $\mathbb{I}$ is the indicator function, $\epsilon$ and $\pmb{\mathrm{e}}$ are exogenous zero-mean noises, and the support of $X$ is the entire real line while $\epsilon$ is bounded. Now, $X$ itself is a balancing score and $|X|$ is a bPGS; and $|X|$ is overlapping but $X$ is not.
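
A quick simulation of this example (with $\epsilon$ uniform on $[-1, 1]$, a bounded choice of ours) confirms that $X$ is limited-overlapping while the bPGS $|X|$ is overlapping in the sampled region:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(scale=3.0, size=n)
eps = rng.uniform(-1.0, 1.0, size=n)   # bounded exogenous noise
t = (x + eps > 0).astype(int)          # T = I(X + eps > 0)

# X is limited-overlapping: treatment is deterministic wherever |x| > 1.
assert t[x > 1.0].min() == 1
assert t[x < -1.0].max() == 0

# The bPGS |X| is overlapping: a score value s is reached with T = 1
# (from x = +s) and with T = 0 (from x = -s), so both arms appear
# even among high score values.
s = np.abs(x)
high = s > 2.0
assert t[high].min() == 0 and t[high].max() == 1
```
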
Moreover, with theoretical and experimental evidence, it has recently been conjectured that PGSs maximize overlap among a class of sufficient scores, including $\pmb{b}(X)$ (D'Amour & Franks, 2021). In general, Hajage et al. (2017) show that prognostic score methods perform better than, or as well as, propensity score methods.

Below is a corollary of Proposition 5 in Hansen (2008); note that $\mathfrak{p}_t(X)$ satisfies exchangeability.

Proposition 1 (Identification via PGS). If $\mathfrak{p}_t(X)$ is a PGS and $Y|\mathfrak{p}_{\hat{t}}(X), T \sim p_{Y|\mathfrak{p}_{\hat{t}},T}(\boldsymbol{y}|P,t)$ where $\hat{t} \in \{0,1\}$ is a counterfactual assignment, then CATE and ATE are identified, using (1) and

$$
\mu_ {\hat {t}} (\boldsymbol {x}) = \mathbb {E} (Y (\hat {t}) | \mathfrak {p} _ {\hat {t}} (X), X = \boldsymbol {x}) = \mathbb {E} (Y | \mathfrak {p} _ {\hat {t}} (\boldsymbol {x}), T = \hat {t}) = \int p _ {Y | \mathfrak {p} _ {\hat {t}}, T} (\boldsymbol {y} | \mathfrak {p} _ {\hat {t}} (\boldsymbol {x}), \hat {t}) \boldsymbol {y} d \boldsymbol {y} \tag {2}
$$

With the knowledge of $\mathfrak{p}_t$ and $p_{Y|\mathfrak{p}_{\hat{t}},T}$ , we choose one of $\mathfrak{p}_0, \mathfrak{p}_1$ and set $t = \hat{t}$ in the density function, w.r.t. the $\mu_{\hat{t}}$ of interest. This counterfactual assignment resolves the problem of non-overlap at $\boldsymbol{x}$ . Note that a sample point with $X = \boldsymbol{x}$ may not have $T = \hat{t}$ .

We consider additive noise models for $Y(t)$ , which ensure the existence of PGSs.

$(\mathbf{G1})^{1}$ (Additive noise model) The data generating process (DGP) for $Y$ is $Y = \pmb {f}^{*}(\mathfrak{m}(X,T),T) + \mathbf{e}$ , where $\pmb {f}^*$ and $\mathfrak{m}$ are functions and $\mathbf{e}$ is a zero-mean exogenous (external) noise.

The DGP is causal and defines potential outcomes by $Y(t) \coloneqq \pmb{f}_t^* (\mathfrak{m}_t(X)) + \mathbf{e}$ , and specifies $\mathfrak{m}(X,T)$ , $T$ , and $\mathbf{e}$ as the only direct causes of $Y$ .
Particularly, $\mathfrak{m}_t(X)$ is a sufficient statistic of $X$ for $Y(t)$ . For example, 1) $\mathfrak{m}_t(X)$ can be the component(s) of $X$ that affect $Y(t)$ directly, or 2) if $Y(t)|X$ follows a generalized linear model, then $\mathfrak{m}_t(X)$ can be the linear predictor of $Y(t)$ .

Under (G1), 1) $\mathfrak{m}_t(X)$ is a PGS; 2) $\mu_t(X) = \pmb{f}_t^* (\mathfrak{m}_t(X))$ is a PGS; 3) $X$ is a (trivial) bPGS; and 4) $\mathfrak{u}(X)\coloneqq (\mu_0(X),\mu_1(X))$ is a bPGS. The essence of our method is to recover the PGS $\mathfrak{m}_t(X)$ as a representation, assuming $\mathfrak{m}_t(X)$ is not higher-dimensional than $Y$ and is approximately balanced. Note that $\mu_t(X)$ , our final target, is a low-dimensional PGS but not balanced, and we estimate it conditioning on the approximate bPGS $\mathfrak{m}_t(X)$ .

# 3 IDENTIFICATION UNDER GENERATIVE PROGNOSTIC MODEL

In Sec. 3.1, we specify the generative prognostic model $p(\pmb{y}, \pmb{z} | \pmb{x}, t)$ and show its identifiability. In Sec. 3.2, we prove the identification of CATEs, which is one of our main contributions. The theoretical analysis involves only our generative model (i.e., prior and decoder), but not the encoder. The encoder is not part of the generative model and is involved as an approximate posterior in the estimation, which is studied in Sec. 4.

# 3.1 MODEL, ARCHITECTURE, AND IDENTIFIABILITY

Our goal is to build a model that can be learned by a VAE from observational data to obtain a PGS, or better, a bPGS, via the latent variable $Z$ . The generative prognostic model of the proposed method is given in (3), where $\pmb{\theta} \coloneqq (\pmb{f},\pmb{h},\pmb{k})$ contains the functional parameters.

![](images/ddb4613cdb09b8e58cd815c57915fbc522833e7225370dc1068e2b1fe81b0ba4.jpg)
Figure 1: CVAE, iVAE, and Intact-VAE: Graphical models of the decoders.
The first factor $p_{\pmb{f}}(\pmb{y}|\pmb{z},t)$ , our decoder, models $p_{Y|\mathfrak{p}_t,T}(\pmb{y}|P,t)$ in (2) and is an additive noise model, with $\epsilon \sim p_{\epsilon}$ as the exogenous noise. The second factor $p_{\lambda}(\pmb{z}|\pmb{x},t)$ , our conditional prior, models $\mathfrak{p}_T(X)$ and is a factorized Gaussian, with $\lambda_T(X) \coloneqq \mathrm{diag}^{-1}(\pmb{k}_T(X))(\pmb{h}_T(X), -\frac{1}{2})^T$ as its natural parameter in the exponential family, where $\mathrm{diag}()$ gives a diagonal matrix from a vector.

$$
\begin{array}{l} p _ {\boldsymbol {\theta}} (\boldsymbol {y}, \boldsymbol {z} | \boldsymbol {x}, t) = p _ {\boldsymbol {f}} (\boldsymbol {y} | \boldsymbol {z}, t)\, p _ {\boldsymbol {\lambda}} (\boldsymbol {z} | \boldsymbol {x}, t), \\ p _ {\boldsymbol {f}} (\boldsymbol {y} | \boldsymbol {z}, t) = p _ {\epsilon} (\boldsymbol {y} - \boldsymbol {f} _ {t} (\boldsymbol {z})), \quad p _ {\boldsymbol {\lambda}} (\boldsymbol {z} | \boldsymbol {x}, t) \sim \mathcal {N} (\boldsymbol {z}; \boldsymbol {h} _ {t} (\boldsymbol {x}), \operatorname {d i a g} (\boldsymbol {k} _ {t} (\boldsymbol {x}))). \end{array} \tag {3}
$$

We denote $n \coloneqq \dim (Z)$ . For inference, the ELBO is given by the standard variational lower bound

$$
\log p (\boldsymbol {y} \mid \boldsymbol {x}, t) \geq \mathbb {E} _ {\boldsymbol {z} \sim q} \log p _ {\boldsymbol {f}} (\boldsymbol {y} \mid \boldsymbol {z}, t) - D _ {\mathrm {K L}} (q (\boldsymbol {z} \mid \boldsymbol {x}, \boldsymbol {y}, t) \| p _ {\boldsymbol {\lambda}} (\boldsymbol {z} \mid \boldsymbol {x}, t)). \tag {4}
$$

Note that the encoder $q$ conditions on all the observables $(X,Y,T)$ ; this fact plays an important role in Sec. 4.1.
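
The generative model (3) can be sampled ancestrally: draw $z$ from the conditional prior, then add outcome noise to the decoder mean. In the following sketch, $h$, $k$, and $f$ are our own illustrative choices, not the learned networks of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative functional parameters theta = (f, h, k); here n = d = 1.
def h(x, t): return np.tanh(x.sum(axis=-1, keepdims=True)) + t  # prior mean h_t(x)
def k(x, t): return np.full(x.shape[:-1] + (1,), 0.1)           # prior variance k_t(x)
def f(z, t): return z ** 3 + t * z                              # injective decoder f_t

def sample_joint(x, t, sigma_eps=0.05):
    """Ancestral sampling from (3): z ~ p_lambda(z|x,t), then y = f_t(z) + eps."""
    z = h(x, t) + np.sqrt(k(x, t)) * rng.normal(size=(x.shape[0], 1))
    y = f(z, t) + sigma_eps * rng.normal(size=z.shape)
    return z, y

x = rng.normal(size=(5, 3))
t = np.array([[0.0], [1.0], [0.0], [1.0], [0.0]])
z, y = sample_joint(x, t)
```
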
Full parameterization of the encoder and decoder is also given in Sec. 4.1. This architecture is called Intact-VAE (Identifiable treatment-conditional VAE). See Figure 1 for a comparison in terms of graphical models (which have no causal implications here). See Sec. C.2 for more exposition and Sec. B.2 for basics of VAEs.

Our model identifiability extends the theory of iVAE, and the following conditions are inherited.

(M1) i) $\pmb{f}_t$ is injective, and ii) $\pmb{f}_t$ is differentiable.
(D1) $\lambda_t(X)$ is non-degenerate, i.e., the linear hull of its support is $2n$ -dimensional.

Under (M1) and (D1), we obtain the following identifiability of the parameters in the model: if $p_{\theta}(\pmb{y}|\pmb{x},t) = p_{\theta'}(\pmb{y}|\pmb{x},t)$ , we have, for any $\pmb{y}_t$ in the image of $\pmb{f}_t$ :

$$
\boldsymbol {f} _ {t} ^ {- 1} \left(\boldsymbol {y} _ {t}\right) = \operatorname {d i a g} (\boldsymbol {a}) \boldsymbol {f} _ {t} ^ {\prime - 1} \left(\boldsymbol {y} _ {t}\right) + \boldsymbol {b} =: \mathcal {A} _ {t} \left(\boldsymbol {f} _ {t} ^ {\prime - 1} \left(\boldsymbol {y} _ {t}\right)\right) \tag {5}
$$

where $\mathrm{diag}(\pmb{a})$ is an invertible $n \times n$ diagonal matrix and $\pmb{b}$ is an $n$ -vector, both of which depend on $\lambda_t(\pmb{x})$ and $\lambda_t^\prime (\pmb{x})$ . The essence of the result is that $\pmb{f}_t^\prime = \pmb{f}_t\circ \mathcal{A}_t$ ; that is, $\pmb{f}_t$ can be identified (learned) up to an affine transformation $\mathcal{A}_t$ . See Sec. A for the proof and a relaxation of (D1). In this paper, the prime symbol always indicates another parameter (variable, etc.): $\theta^{\prime} = (\pmb{f}^{\prime},\pmb{\lambda}^{\prime})$ .

# 3.2 IDENTIFICATIONS UNDER LIMITED-OVERLAPPING COVARIATE

In this subsection, we present two results of CATE identification, based on the recovery of an equivalent bPGS and PGS, respectively.
Since PGSs are functions of $X$ , the theory assumes a noiseless prior for simplicity, i.e., $k(X) = 0$ ; the prior $Z_{\lambda,t} \sim p_{\lambda}(z|x,t)$ degenerates to the function $h_t(X)$ .

PGSs with dimensionality lower than or equal to $d = \dim(Y)$ are essential to addressing limited overlap, as shown below. We set $n = d$ because $\mu_t$ is a PGS of the same dimension as $Y$ under (G1). In practice, $n = d$ means that we seek a low-dimensional representation of $X$ . We introduce

(G1') (Low-dimensional PGS) (G1) is true, and $\mu_t = j_t \circ \mathfrak{p}_t$ for some $\mathfrak{p}_t$ and injective $j_t$ ,

which is equivalent to (G1) because $\mu_t = j_t \circ \mathfrak{p}_t$ is trivially satisfied with $j_t$ the identity and $\mathfrak{p}_t = \mu_t$ . (G1') is used instead in this subsection. First, it explicitly restricts $\dim(\mathfrak{p}_t)$ via injectivity, which ensures that $n = \dim(Y) \geq \dim(\mathfrak{p}_t)$ . Second, it reminds us that the decomposition is possibly not unique; and, clearly, all $\mathfrak{p}_t$ that satisfy (G1') are PGSs. For example, if $f_t^*$ is injective, then $j_t = f_t^*$ and $\mathfrak{p}_t = \mathfrak{m}_t$ satisfy $\mu_t = j_t \circ \mathfrak{p}_t$ . Finally, it is then natural to introduce

(G2) (Low-dimensional bPGS) (G1) is true, and $\mu_t = j_t \circ \mathfrak{p}$ for some $\mathfrak{p}$ and injective $j_t$ ,

which is stronger than (G1), gives a bPGS $\mathfrak{p}(X)$ , and ensures that $n \geq \dim(\mathfrak{p})$ . (G2) is satisfied if $f_{t}^{*}$ is injective and $\mathfrak{m}_0 = \mathfrak{m}_1$ . (G2) implies $\mu_1 = i \circ \mu_0$ where $i \coloneqq j_1 \circ j_0^{-1}$ ; in words, CATEs are given by $\mu_0$ and an invertible function. See Sec. C.3 for real-world examples and more discussion.
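
A toy numerical instance of (G2), with functions entirely of our own choosing, checks the implication $\mu_1 = i \circ \mu_0$:

```python
import numpy as np

# Toy instance of (G2): a 1-D bPGS p(x) shared by both arms,
# composed with injective link functions j_t (all illustrative).
def p(x):    return x ** 2
def j0(s):   return s + 1.0          # mu_0 = j_0 o p
def j1(s):   return np.exp(s)        # mu_1 = j_1 o p

def mu0(x):  return j0(p(x))
def mu1(x):  return j1(p(x))

# (G2) implies mu_1 = i o mu_0 with i = j_1 o j_0^{-1} invertible;
# here j_0^{-1}(u) = u - 1, so i(u) = exp(u - 1).
def i(u):    return np.exp(u - 1.0)

x = np.linspace(-2.0, 2.0, 9)
assert np.allclose(mu1(x), i(mu0(x)))
```
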

With (G1') or (G2), overlapping $X$ can be relaxed to an overlapping bPGS or PGS plus the following:

(M2) (Score partition preserving) For any $\pmb{x}, \pmb{x}' \in \mathcal{X}$ , if $\mathfrak{p}_t(\pmb{x}) = \mathfrak{p}_t(\pmb{x}')$ , then $h_t(\pmb{x}) = h_t(\pmb{x}')$ .

Note that (M2) is only required for the optimal $h$ specified in Proposition 2 or Theorem 1. The intuition is that $\mathfrak{p}_t$ maps each non-overlapping $x$ to an overlapping value, and $h_t$ preserves this property through learning. This is non-trivial because, for a given $t$ , some values of $X$ are unobserved due to limited overlap. Thus, (M2) can be seen as a weak form of OOD generalization: the NNs for $h$ can learn the OOD score partition. While unnecessary for us, linear $\mathfrak{p}_t$ and $h_t$ trivially imply (M2) and are often assumed, e.g., in Huang & Chan (2017); Luo et al. (2017); D'Amour & Franks (2021).

Our first identification, Proposition 2, relies on (G2) and our generative model, without model identifiability (so differentiable $f_{t}$ is not needed).

Proposition 2 (Identification via recovery of bPGS). Suppose we have DGP (G2) and model (3) with $n = d$ . Assume (M1)-i) and (M3) (PS matching): let $h_0(X) = h_1(X)$ and $k(X) = 0$ . Then, if $\mathbb{E}_{p_\theta}(Y|X,T) = \mathbb{E}(Y|X,T)$ , we have

1) (Recovery of bPGS) $z_{\lambda ,t} = h_t(\pmb {x}) = \pmb {v}(\mathfrak{p}(\pmb {x}))$ on overlapping $\pmb{x}$ ,

where $\pmb{v}:\mathcal{P}\to \mathbb{R}^n$ is an injective function, and $\mathcal{P}\coloneqq \{\mathfrak{p}(\pmb {x})\,|$ overlapping $\pmb{x}\}$ ;

2) (CATE identification) if $\mathfrak{p}(X)$ in (G2) is overlapping, and (M2) is satisfied, then

$\mu_t(\pmb{x}) = \hat{\mu}_t(\pmb{x}) \coloneqq \mathbb{E}_{p_\lambda(Z|\pmb{x},t)} \mathbb{E}_{p_\pmb{f}}(Y|Z,t) = \pmb{f}_t(\pmb{h}_t(\pmb{x})),$ for any $t \in \{0,1\}$ and $\pmb{x} \in \mathcal{X}$ .
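
As a toy consistency check of this recovery (all functions are our own illustration): if the learned prior mean is $h = v \circ \mathfrak{p}$ and the learned decoder is $f_t = j_t \circ v^{-1}$ for some injective $v$, then the plug-in estimate $f_t(h(x))$ reproduces $\mu_t(x) = j_t(\mathfrak{p}(x))$ exactly:

```python
import numpy as np

# Toy instance of the recovery in Proposition 2 (all functions illustrative).
def p(x):     return x ** 2                 # true bPGS
def j0(s):    return s + 1.0                # mu_0 = j_0 o p
def j1(s):    return 2.0 * s + 1.0          # mu_1 = j_1 o p

def v(s):     return 3.0 * s - 0.5          # unknown injective reparameterization
def v_inv(u): return (u + 0.5) / 3.0

def h(x):     return v(p(x))                # recovered prior mean, h_0 = h_1
def f(u, t):  return (j1 if t else j0)(v_inv(u))  # recovered decoder f_t = j_t o v^{-1}

x = np.linspace(-2.0, 2.0, 9)
# Conclusion 2): the plug-in estimate f_t(h(x)) equals mu_t(x) for both t.
assert np.allclose(f(h(x), 0), j0(p(x)))
assert np.allclose(f(h(x), 1), j1(p(x)))
```

The reparameterization $v$ never needs to be known: any injective $v$ cancels in the composition $f_t \circ h$, which is the sense in which the DGP is identified only "up to $v$".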

In essence, i) the true DGP is identified up to an invertible mapping $\pmb{v}$ , such that $\pmb{f}_t = \pmb{j}_t \circ \pmb{v}^{-1}$ and $\pmb{h} = \pmb{v} \circ \mathfrak{p}$ ; and ii) $\mathfrak{p}_t$ is recovered up to $\pmb{v}$ , and $Y(t) \perp X|\mathfrak{p}_t(X)$ is preserved—with the same $\pmb{v}$ for both $t$ . Theorem 1 below also achieves the essence i) and ii), under $\mathfrak{p}_0 \neq \mathfrak{p}_1$ .

The existence of a bPGS is preferred, because a bPGS satisfies overlap and (M2) more easily than a PGS, which requires these conditions for each of its two component functions. However, the existence of a low-dimensional bPGS is uncertain in practice when our knowledge of the DGP is limited. Thus, we rely on Theorem 1, based on model identifiability, to work with a PGS, which exists in general.

Theorem 1 (Identification via recovery of PGS). Suppose we have DGP $(G1^{\prime})$ and model (3) with $n = d$ . For the model, assume $(M1)$ and $(\mathbf{M3}^{\prime})$ (Noise matching): let $p_{\mathrm{e}} = p_{\epsilon}$ and $\pmb{k}(X) = k\pmb{k}'(X), k \to 0$ . Assume further $(D1)$ and $(\mathbf{D2})$ (Balance from data): $\mathcal{A}_0 = \mathcal{A}_1$ in (5). Then, if $p_{\theta}(\pmb{y}|\pmb{x}, t) = p(\pmb{y}|\pmb{x}, t)$ , conclusions 1) and 2) in Proposition 2 hold with $\mathfrak{p}$ replaced by $\mathfrak{p}_t$ in $(G1^{\prime})$ , and the domain of $\pmb{v}$ becomes $\mathcal{P} := \{\mathfrak{p}_t(\pmb{x}) | p(t, \pmb{x}) > 0\}$ .

Theorem 1 implies that, without a bPGS, we need to know or learn the distribution of the hidden noise $\epsilon$ to have $p_{\mathrm{e}} = p_{\epsilon}$ . Proposition 2 and Theorem 1 achieve recovery and identification in a complementary manner; the former starts from the prior by $\mathfrak{p}_0 = \mathfrak{p}_1$ and $h_0 = h_1$ , while the latter starts from the decoder by $\mathcal{A}_0 = \mathcal{A}_1$ and $p_{\mathrm{e}} = p_{\epsilon}$ .
We see that $\mathcal{A}_0 = \mathcal{A}_1$ acts as a kind of balance because it replaces $\mathfrak{p}_0 = \mathfrak{p}_1$ in Proposition 2. We show in Sec. A a sufficient and necessary condition (D2') on data that ensures $\mathcal{A}_0 = \mathcal{A}_1$ . Note that the singularities due to $k \to 0$ (e.g., $\lambda \to 0$ ) cancel out in (5). See Sec. C.4 for more on the complementarity between the two identifications. + +# 4 ESTIMATION BY $\beta$ -INTACT-VAE + +# 4.1 PRIOR AS BPGS, POSTERIOR AS PGS, AND $\beta$ AS REGULARIZATION STRENGTH + +In Sec. 3.2, we see that the existence of bPGS (Proposition 2) is preferable in identifying the true DGP up to an equivalent expression—while Theorem 1 allows us to deal with PGS by adding other conditions. In learning our model with data, we formally require (G1) and further expect that (G2) holds approximately; the latter is true when $f_{t}^{*}$ is injective and $\mathfrak{m}_0 \approx \mathfrak{m}_1$ ( $\mathfrak{m}_t(X)$ is an approximate bPGS). Instead of the trivial regression $\mu_t(X) = \mathbb{E}(Y|X,T = t)$ , we want to recover the approximate bPGS $\mathfrak{m}_t(X)$ . This idea is common in practice. For example, in a real-world nutrition study (Huang & Chan, 2017), a reduction of 11 covariates recovers a 1-dimensional linear bPGS. + +We consider two ways to recover an approximate bPGS by a VAE. One is to use a prior which does not depend on $t$ , indicating a preference for bPGS. Namely, we set $\lambda_0 = \lambda_1$ , denote $\Lambda(X) \coloneqq \lambda(X)$ and have $p_{\Lambda}(z|x)$ as the prior in (3). 
The decoder and encoder are factorized Gaussians:

$$
p _ {f, g} (\boldsymbol {y} \mid \boldsymbol {z}, t) = \mathcal {N} (\boldsymbol {y}; \boldsymbol {f} _ {t} (\boldsymbol {z}), \operatorname {d i a g} (\boldsymbol {g} _ {t} (\boldsymbol {z}))), \quad q _ {\phi} (\boldsymbol {z} \mid \boldsymbol {x}, \boldsymbol {y}, t) = \mathcal {N} (\boldsymbol {z}; \boldsymbol {r} _ {t} (\boldsymbol {x}, \boldsymbol {y}), \operatorname {d i a g} (\boldsymbol {s} _ {t} (\boldsymbol {x}, \boldsymbol {y}))), \tag {6}
$$

where $\phi = (r, s)$ . The other is to introduce a hyperparameter $\beta$ in the ELBO as in $\beta$ -VAE (Higgins et al., 2017). The modified ELBO with $\beta$ , up to an additive constant, is derived as:

$$
\mathbb {E} _ {\mathcal {D}} \{- \beta D _ {\mathrm {K L}} \left(q _ {\phi} \| p _ {\boldsymbol {\Lambda}}\right) - \mathbb {E} _ {\boldsymbol {z} \sim q _ {\phi}} \left[ \left(\boldsymbol {y} - \boldsymbol {f} _ {t} (\boldsymbol {z})\right) ^ {2} / 2 \boldsymbol {g} _ {t} (\boldsymbol {z}) \right] - \mathbb {E} _ {\boldsymbol {z} \sim q _ {\phi}} \log | \boldsymbol {g} _ {t} (\boldsymbol {z}) | \}. \tag {7}
$$

For convenience, here and in $\mathcal{L}_f$ in Sec. 4.2, we omit the summation as if $Y$ were univariate. The encoder $q_{\phi}$ depends on $t$ and can realize a PGS. With $\beta$ , we control the trade-off between the first and second terms: the former is the divergence of the posterior from the balanced prior, and the latter is the reconstruction of the outcome. Note that a larger $\beta$ encourages the conditional balance $Z \bot T|X$ on the posterior. By choosing $\beta$ appropriately, e.g., by validation, the ELBO can recover an approximate bPGS while fitting the outcome well. In summary, we base the estimation on Proposition 2 and a bPGS as much as possible, but step into Theorem 1 and the noise modeling required by $p_{\mathrm{e}} = p_{\epsilon}$ when necessary.
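
For factorized Gaussians, the terms of the $\beta$-weighted ELBO (7) have closed or one-sample Monte Carlo forms. The following sketch computes them for a single decoder sample $z \sim q_\phi$ (the function names and array layout are our own):

```python
import numpy as np

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    # KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ), summed over dimensions.
    return 0.5 * np.sum(var_q / var_p + (mu_q - mu_p) ** 2 / var_p
                        - 1.0 + np.log(var_p / var_q), axis=-1)

def beta_elbo(y, f_t_z, g_t_z, mu_q, var_q, mu_p, var_p, beta):
    """One-sample Monte Carlo estimate of the beta-weighted ELBO (7), up to
       additive constants; z is assumed already drawn from the encoder q_phi."""
    kl = kl_diag_gauss(mu_q, var_q, mu_p, var_p)               # balance term
    recon = np.sum((y - f_t_z) ** 2 / (2.0 * g_t_z), axis=-1)  # outcome fit
    log_noise = np.sum(np.log(g_t_z), axis=-1)                 # noise-magnitude term
    return -beta * kl - recon - log_noise
```

Raising `beta` up-weights the KL term, pulling the posterior toward the $t$-independent prior and thus encouraging the conditional balance $Z \bot T|X$ at the cost of outcome reconstruction.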
+ +Note also that the parameters $g$ and $k$ , which model the outcome noise and express the uncertainty of the prior, respectively, are both learned by the ELBO. This deviates from the theoretical conditions described in Sec. 3.2, but it is more practical and yields better results in our experiments. See Sec. C.5 for more ideas and connections behind the ELBO. + +Once the VAE is learned² by the ELBO, the estimate of the expected potential outcomes is given by: + +$$ +\hat {\mu} _ {\hat {t}} (\boldsymbol {x}) = \mathbb {E} _ {q (\boldsymbol {z} | \boldsymbol {x})} \boldsymbol {f} _ {\hat {t}} (\boldsymbol {z}) = \mathbb {E} _ {\mathcal {D} | \boldsymbol {x} \sim p (\boldsymbol {y}, t | \boldsymbol {x})} \mathbb {E} _ {\boldsymbol {z} \sim q _ {\phi}} \boldsymbol {f} _ {\hat {t}} (\boldsymbol {z}), \hat {t} \in \{0, 1 \}, \tag {8} +$$ + +where $q(\boldsymbol{z}|\boldsymbol{x}) \coloneqq \mathbb{E}_{p(\boldsymbol{y},t|\boldsymbol{x})}q_{\phi}(\boldsymbol{z}|\boldsymbol{x},\boldsymbol{y},t)$ is the aggregated posterior. We mainly consider the case where $\boldsymbol{x}$ is observed in the data, and the sample of $(Y,T)$ is taken from the data given $X = \boldsymbol{x}$ . When $\boldsymbol{x}$ is not in the data, we replace $q_{\phi}$ with $p_{\Lambda}$ in (8) (see Sec. C.7 for details and E for results). Note that $\hat{t}$ in (8) indicates a counterfactual assignment that may not be the same as the factual $T = t$ in the data. That is, we set $T = \hat{t}$ in the decoder. The assignment is not applied to the encoder which is learned from factual $X,Y,T$ (see also the explanation of $\epsilon_{CF,t}$ in Sec. 4.2). The overall algorithm steps are i) train the VAE using (7), and ii) infer CATE $\hat{\tau}(\boldsymbol{x}) = \hat{\mu}_1(\boldsymbol{x}) - \hat{\mu}_0(\boldsymbol{x})$ by (8). + +# 4.2 CONDITIONALLY BALANCED REPRESENTATION LEARNING + +We formally justify our ELBO (7) from the BRL viewpoint. 
We show that the conditional BRL via the KL (first) term of the ELBO results from bounding a CATE error; in particular, the error due to the imprecise recovery of $j_{t}$ in (G1') is controlled by the ELBO. Previous works (Shalit et al., 2017; Lu et al., 2020) instead focus on unconditional balance and bound PEHE, which is marginalized over $X$ . Sec. 5.2 experimentally shows the advantage of our bounds and ELBO. Further, we connect the bounds to identification and consider noise modeling through $g_{t}(z)$ . See Sec. D.3 for detailed comparisons to previous works. In Sec. E.4, we empirically validate our bounds and find that, in particular, they are more useful under weaker overlap.

We introduce the objective that we bound. Using (8) to estimate CATE, $\hat{\tau}_{\pmb{f}}(\pmb{z}) \coloneqq \pmb{f}_1(\pmb{z}) - \pmb{f}_0(\pmb{z})$ is marginalized over $q(\pmb{z}|\pmb{x})$ . On the other hand, the true CATE, given the covariate $\pmb{x}$ or score $\pmb{z}$ , is:

$$
\tau (\boldsymbol {x}) = \boldsymbol {j} _ {1} \left(\mathfrak {p} _ {1} (\boldsymbol {x})\right) - \boldsymbol {j} _ {0} \left(\mathfrak {p} _ {0} (\boldsymbol {x})\right), \quad \tau_ {\boldsymbol {j}} (\boldsymbol {z}) = \boldsymbol {j} _ {1} (\boldsymbol {z}) - \boldsymbol {j} _ {0} (\boldsymbol {z}), \tag {9}
$$

where $\pmb{j}_t$ is associated with an approximate bPGS $\mathfrak{p}_t$ (say, $\mathfrak{m}_t$ ) as the target of recovery by our VAE. Accordingly, given $\pmb{x}$ , the error of the posterior CATE, with or without knowing $\mathfrak{p}_t$ , is defined as

$$
\epsilon_ {f} ^ {*} (\boldsymbol {x}) := \mathbb {E} _ {q (z | \boldsymbol {x})} \left(\hat {\tau} _ {\boldsymbol {f}} (\boldsymbol {z}) - \tau (\boldsymbol {x})\right) ^ {2}; \quad \epsilon_ {\boldsymbol {f}} (\boldsymbol {x}) := \mathbb {E} _ {q (z | \boldsymbol {x})} \left(\hat {\tau} _ {\boldsymbol {f}} (\boldsymbol {z}) - \tau_ {\boldsymbol {j}} (\boldsymbol {z})\right) ^ {2}.
\tag {10}
$$

We bound $\epsilon_{f}$ instead of $\epsilon_{f}^{*}$ because the error between $\tau(X)$ and $\tau_{j}(Z)$ is small—if the score recovery works well, then $z \approx \mathfrak{p}_0(x) \approx \mathfrak{p}_1(x)$ in (9). We consider the error between $\hat{\tau}_{f}$ and $\tau_{j}$ below. We define the risks of outcome regression, into which $\epsilon_{f}$ is decomposed.

Definition 3 (CATE risks). Let $Y(\hat{t})|\mathfrak{p}_{\hat{t}}(X) \sim p_{Y(\hat{t})|\mathfrak{p}_{\hat{t}}}(\boldsymbol{y}|P)$ and $q_{t}(\boldsymbol{z}|\boldsymbol{x}) \coloneqq q(\boldsymbol{z}|\boldsymbol{x}, t) = \mathbb{E}_{p(\boldsymbol{y}|\boldsymbol{x}, t)}q_{\phi}$ . The potential outcome loss at $(\boldsymbol{z}, \hat{t})$ , factual risk, and counterfactual risk are:

$$
\begin{array}{l} \mathcal {L} _ {\boldsymbol {f}} (\boldsymbol {z}, \hat {t}) := \mathbb {E} _ {p _ {Y (\hat {t}) | \mathfrak {p} _ {\hat {t}}} (\boldsymbol {y} | P = \boldsymbol {z})} (\boldsymbol {y} - \boldsymbol {f} _ {\hat {t}} (\boldsymbol {z})) ^ {2} / \boldsymbol {g} _ {\hat {t}} (\boldsymbol {z}) = \boldsymbol {g} _ {\hat {t}} (\boldsymbol {z}) ^ {- 1} \int (\boldsymbol {y} - \boldsymbol {f} _ {\hat {t}} (\boldsymbol {z})) ^ {2} p _ {Y (\hat {t}) | \mathfrak {p} _ {\hat {t}}} (\boldsymbol {y} | \boldsymbol {z}) d \boldsymbol {y}; \\ \epsilon_ {F, t} (\boldsymbol {x}) := \mathbb {E} _ {q _ {t} (\boldsymbol {z} | \boldsymbol {x})} \mathcal {L} _ {\boldsymbol {f}} (\boldsymbol {z}, t); \quad \epsilon_ {C F, t} (\boldsymbol {x}) := \mathbb {E} _ {q _ {1 - t} (\boldsymbol {z} | \boldsymbol {x})} \mathcal {L} _ {\boldsymbol {f}} (\boldsymbol {z}, t). \end{array}
$$

With $Y(t)$ involved, $\mathcal{L}_f$ is a potential outcome loss on $f$ , weighted by $g$ . The factual and counterfactual counterparts, $\epsilon_{F,t}$ and $\epsilon_{CF,t}$ , are defined accordingly.
In $\epsilon_{F,t}$ , unit $\pmb{u} = (\pmb{x},\pmb{y},t)$ is involved in the learning of $q_t(z|x)$ , as well as in $\mathcal{L}_f(z,t)$ since $Y(t) = y$ for the unit. In $\epsilon_{CF,t}$ , however, unit $\pmb{u}' = (\pmb{x},\pmb{y}',1 - t)$ is involved in $q_{1 - t}(z|x)$ , but not in $\mathcal{L}_f(z,t)$ since $Y(t)\neq y' = Y(1 - t)$ .

Thus, the regression error (second) term in ELBO (7) controls $\epsilon_{F,t}$ via factual data. On the other hand, $\epsilon_{CF,t}$ is not estimable due to the unobservable $Y(1 - T)$ , but is bounded by $\epsilon_{F,t}$ plus $MD(\pmb{x})$ in Theorem 2 below—which, in turn, bounds $\epsilon_{f}$ by decomposing it into $\epsilon_{F,t}$ , $\epsilon_{CF,t}$ , and $\mathbf{V}_Y$ .

Theorem 2 (CATE error bound). Assume $|\mathcal{L}_f(z,t)| \leq M$ and $|g_t(z)| \leq G$ ; then:

$$
\epsilon_ {\boldsymbol {f}} (\boldsymbol {x}) \leq 2 \left[ G \left(\epsilon_ {F, 0} (\boldsymbol {x}) + \epsilon_ {F, 1} (\boldsymbol {x}) + M D (\boldsymbol {x})\right) - \mathbf {V} _ {Y} (\boldsymbol {x}) \right] \tag {11}
$$

where $D(\pmb {x})\coloneqq \sum_{t}\sqrt{D_{\mathrm{KL}}(q_{t}\|q_{1 - t}) / 2},$ and $\mathbf{V}_Y(\pmb {x})\coloneqq \mathbb{E}_{q(\pmb {z}|\pmb {x})}\sum_t\mathbb{E}_{p_{Y(t)|\mathfrak{p}_t}(\pmb {y}|\pmb {z})}(\pmb {y} - \pmb {j}_t(\pmb {z}))^2.$

$D(\pmb{x})$ measures the imbalance between the $q_{t}(\pmb{z}|\pmb{x})$ and is symmetric in $t$ . Correspondingly, the KL term in ELBO (7) is symmetric in $t$ and balances $q_{t}(\pmb{z}|\pmb{x})$ by encouraging $Z \perp T|X$ for the posterior. $\mathbf{V}_Y(\pmb{x})$ reflects the intrinsic variance in the DGP and cannot be controlled. Estimating $G, M$ is nontrivial. Instead, we rely on $\beta$ in the ELBO (7) to weight the terms. We do not need two hyperparameters since $G$ is implicitly controlled by the third term, a norm constraint, in the ELBO.

# 5 EXPERIMENTS

We compare our method with existing methods on three types of datasets.
Here, we present two experiments; the remaining one on the Pokec dataset is deferred to Sec. E.3. As in previous works (Shalit et al., 2017; Louizos et al., 2017), we report the absolute error of ATE $\epsilon_{ate} \coloneqq |\mathbb{E}_{\mathcal{D}}(y(1) - y(0)) - \mathbb{E}_{\mathcal{D}}\hat{\tau}(\boldsymbol{x})|$ and, as a surrogate of square CATE error $\epsilon_{cate}(\boldsymbol{x}) = \mathbb{E}_{\mathcal{D}|\boldsymbol{x}}[(y(1) - y(0)) - \hat{\tau}(\boldsymbol{x})]^2$ , the empirical PEHE $\epsilon_{pehe} \coloneqq \mathbb{E}_{\mathcal{D}}\epsilon_{cate}(\boldsymbol{x})$ (Hill, 2011), which is the average square CATE error. + +Unless otherwise indicated, for each function $f, g, h, k, r, s$ in ELBO (7), we use a multilayer perceptron, with $200 * 3$ hidden units (width 200, 3 layers), and ELU activations (Clevert et al., 2015). $\Lambda = (h, k)$ depends only on $X$ . The Adam optimizer with initial learning rate $10^{-4}$ and batch size 100 is employed. All experiments use early-stopping of training by evaluating the ELBO on a validation set. More details on hyper-parameters and settings are given in each experiment. + +# 5.1 SYNTHETIC DATASET + +$$ +W | X \sim \mathcal {N} (\boldsymbol {h} (X), \boldsymbol {k} (X)); T | X \sim \operatorname {B e r n} (\operatorname {L o g i} (\omega l (X))); Y | W, T \sim \mathcal {N} \left(f _ {T} (W), g _ {T} (W)\right). \tag {12} +$$ + +We generate synthetic datasets following (12). Both $X \sim \mathcal{N}(\pmb{\mu}, \pmb{\sigma})$ and $W$ are factorized Gaussians. $\pmb{\mu}, \pmb{\sigma}$ are randomly sampled. The functions $h, k, l$ are linear. Outcome models $f_0, f_1$ are built by NNs with invertible activations. $Y$ is univariate, $\dim(X) = 30$ , and $\dim(W)$ ranges from 1 to 5. $W$ is a bPGS, but the dimensionality is not low enough to satisfy the injectivity in (G2), when $\dim(W) > 1$ . We have 5 different overlap levels controlled by $\omega$ that multiplies the logit value. See Sec. 
E.1 for details and more results on synthetic datasets.

With the same $(\dim(W), \omega)$ , we evaluate our method and CFR on 10 random DGPs, with different sets of functions $f, g, h, k, l$ in (12). For each DGP, we sample 1500 data points and split them into 3 equal sets for training, validation, and testing. We show our results for different values of the hyperparameter $\beta$ . For CFR, we try different balancing parameters and present the best results (see the Appendix for details).

![](images/1fcb48f7ee20934f5737118dd154656b6d126595e1b16a73f52a3272708cd414.jpg)
Figure 2: $\sqrt{\epsilon_{pehe}}$ on synthetic datasets. Each error bar is on 10 random DGPs.

In each panel of Figure 2, we adjust one of $\omega$ , $\dim(W)$ , with the other fixed to the lowest. As implied by our theory, our method, with only 1-dimensional $Z$ , performs much better in the left panel (where $\dim(W) = 1$ satisfies (G2)) than in the right panel (where $\dim(W) > 1$ ). Although CFR uses a 200-dimensional representation, in the left panel our method performs much better than CFR; moreover, in the right panel CFR is not much better than ours. Further, our method is much more robust against different DGPs than CFR (see the error bars). Thus, the results indicate the power of identification and recovery of scores (see also Figure 3).

![](images/a53db5701adc53d719ebfa9dcf86115968b1cd2b7130992e8e3a5b09bdac4b7e.jpg)
Figure 3: Plots of recovered - true latent. Blue: $T = 0$ , Orange: $T = 1$ .

Under the lowest overlap level $(\omega = 22)$ , large $\beta$ ( $= 2.5, 3$ ) shows the best results, which accords with the intuition and bounds in Sec. 4. When $\dim(W) > 1$ , $f_{t}$ in (12) is non-injective and learning of a PGS is necessary, and thus larger $\beta$ has a negative effect. In fact, $\beta = 1$ is significantly better than $\beta = 3$ when $\dim(W) > 2$ .
We note that our method with a higher-dimensional $Z$ outperforms or matches CFR also under $\dim(W) > 1$ (see Appendix Figure 5). Thus, the performance gap under $\dim(W) > 1$ in Figure 2 should be due to the capacity of the NNs in $\beta$-Intact-VAE. In Appendix Figure 7, which reports ATE error, CFR's performance drops as overlap decreases. This is evidence that CFR and its unconditional balance overly focus on PEHE (see Sec. 5.2 for a more explicit comparison).

When $\dim(W) = 1$, there is no better PS than $W$, because $f_{t}$ is invertible and no information can be dropped from $W$. Thus, our method stably learns $Z$ as an approximate affine transformation of the true $W$, showing identification. An example is shown in Figure 3, and more plots are in Appendix Figure 9. For comparison, we run CEVAE, which is also based on a VAE but without identification; CEVAE shows much lower quality of recovery. As expected, both recovery and estimation are better with the balanced prior $p_{\Lambda}(z|x)$; examples of bad recovery using $p_{\lambda}(z|x,t)$ are shown in Appendix Figure 10.

# 5.2 IHDP BENCHMARK DATASET

This experiment shows that our conditional BRL matches state-of-the-art BRL methods and does not overly focus on PEHE. IHDP (Hill, 2011) is a widely used benchmark dataset; it is less well known that its covariates are limited-overlapping, which is why it is used in Johansson et al. (2020), a work that considers limited overlap. The dataset is based on an RCT, but Race is artificially introduced as a confounder by removing all treated babies with nonwhite mothers from the data. Thus, Race is highly limited-overlapping, and other covariates that are highly correlated with Race, e.g., Birth weight (Kelly et al., 2009), are also limited-overlapping. See Sec. E.2 for details and more results.

There is a linear bPGS (a linear combination of the covariates). However, most of the covariates are binary, so the support of the bPGS is often on small and separated intervals.
Thus, the Gaussian latent $Z$ in our model is misspecified. To address this, we use a higher-dimensional $Z$, similar to Louizos et al. (2017). Specifically, we set $\dim(Z) = 50$, together with NNs of $50 * 2$ hidden units in the prior and encoder. We set $\beta = 1$ since it works well on synthetic datasets with limited overlap.

As shown in Table 1, $\beta$-Intact-VAE outperforms or matches the state-of-the-art methods: it has the best performance measured by both $\epsilon_{ate}$ and $\epsilon_{pehe}$, matching CF and CFR respectively. Also notably, our method outperforms the other generative models (CEVAE and GANITE) by large margins.

To show that our conditional balance is preferable, we also modify our method by adding two components for unconditional balance from CFR (see the Appendix), which are based on bounding PEHE and controlled by another hyperparameter $\gamma$. In the modified version, the unconditional balance's over-focus on PEHE is seen clearly: varying $\gamma$ significantly affects PEHE but barely affects the ATE error. In fact, the unconditional balance, with larger $\gamma$, only worsens the performance. See also Appendix Figure 7, where CFR gives larger ATE errors with less overlap.

Table 1: Errors on IHDP over 1000 random DGPs. "Mod. *" indicates the modified version with unconditional balance of strength $\gamma = *$. Italic indicates where the modified version is significantly worse than the original. Bold indicates the method(s) significantly better than the others. The results of other methods are taken from Shalit et al. (2017), except for GANITE and CEVAE, whose results are taken from their original works.
| Method | TMLE | BNN | CFR | CF | CEVAE | GANITE | Ours | Mod. 1 | Mod. 0.2 | Mod. 0.1 | Mod. 0.05 | Mod. 0.01 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $\epsilon_{ate}$ | .30±.01 | .37±.03 | .25±.01 | .18±.01 | .34±.01 | .43±.05 | .180±.007 | .185±.008 | .185±.008 | .186±.009 | .183±.008 | .181±.008 |
| $\sqrt{\epsilon_{pehe}}$ | 5.0±.2 | 2.2±.1 | .71±.02 | 3.8±.2 | 2.7±.1 | 1.9±.4 | .709±.024 | 1.175±.046 | .797±.030 | .748±.028 | .732±.028 | .719±.027 |
+ +# 6 CONCLUSION + +We proposed a method for CATE estimation under limited overlap. Our method exploits identifiable VAE, a recent advance in generative models, and is fully motivated and theoretically justified by causal considerations: identification, prognostic score, and balance. Experiments show evidence that the injectivity of $f_{t}$ in our model is possibly unnecessary because $\dim(Z) > \dim(Y)$ yields better results. A theoretical study of this is an interesting future direction. We have evidence that Intact-VAE works under unobserved confounding and believe that VAEs are suitable for principled causal inference owing to their probabilistic nature, if not compromised by ad hoc heuristics (Wu & Fukumizu, 2021). + +# REFERENCES + +Jason Abrevaya, Yu-Chin Hsu, and Robert P Lieli. Estimating conditional average treatment effects. Journal of Business & Economic Statistics, 33(4):485-505, 2015. +Ahmed M Alaa and Mihaela van der Schaar. Bayesian inference of individualized treatment effects using multi-task gaussian processes. In Advances in Neural Information Processing Systems, pp. 3424-3432, 2017. +Timothy B Armstrong and Michal Kolesár. Finite-sample optimal estimation and inference on average treatment effects under unconfoundedness. *Econometrica*, 89(3):1141-1177, 2021. +Stéphane Bonhomme and Martin Weidner. Posterior average effects. Journal of Business & Economic Statistics, (just-accepted):1-38, 2021. +Victor Chernozhukov and Christian Hansen. Quantile models with endogeneity. Annu. Rev. Econ., 5(1):57-81, 2013. +Denis Chetverikov and Daniel Wilhelm. Nonparametric instrumental variable estimation under monotonicity. *Econometrica*, 85(4):1303-1320, 2017. +Denis Chetverikov, Andres Santos, and Azeem M Shaikh. The econometrics of shape restrictions. Annual Review of Economics, 10:31-63, 2018. +Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). 
arXiv preprint arXiv:1511.07289, 2015. +Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in neural information processing systems, pp. 2292-2300, 2013. +Wangzhi Dai and Collin M Stultz. Quantifying common support between multiple treatment groups using a contrastive-vae. In Machine Learning for Health, pp. 41-52. PMLR, 2020. +Alexander D'Amour and Alexander Franks. Deconfounding scores: Feature representations for causal effect estimation with weak overlap. arXiv preprint arXiv:2104.05762, 2021. +Carl Doersch. Tutorial on variational autoencoders. arXiv preprint arXiv:1606.05908, 2016. +Alexander D'Amour, Peng Ding, Avi Feller, Lihua Lei, and Jasjeet Sekhon. Overlap in observational studies with high-dimensional covariates. Journal of Econometrics, 2020. +Max H Farrell. Robust inference on average treatment effects with possibly more covariates than observations. Journal of Econometrics, 189(1):1-23, 2015. +Joachim Freyberger and Joel L Horowitz. Identification and shape restrictions in nonparametric instrumental variables estimation. Journal of Econometrics, 189(1):41-53, 2015. +Li Gan and Qi Li. Efficiency of thin and thick markets. Journal of Econometrics, 192(1):40-54, 2016. +Prem K Gopalan and David M Blei. Efficient discovery of overlapping communities in massive networks. Proceedings of the National Academy of Sciences, 110(36):14534-14539, 2013. +Sander Greenland. The effect of misclassification in the presence of covariates. American journal of epidemiology, 112(4):564-569, 1980. +David Hajage, Yann De Rycke, Guillaume Chauvet, and Florence Tubach. Estimation of conditional and marginal odds ratios using the prognostic score. Statistics in medicine, 36(4):687-716, 2017. +Ben B Hansen. The prognostic analogue of the propensity score. Biometrika, 95(2):481-488, 2008. +Negar Hassanpour and Russell Greiner. Learning disentangled representations for counterfactual regression. 
In International Conference on Learning Representations, 2019. +Miguel A. Hernan and James M. Robins. Causal Inference: What If. CRC Press, 1st edition, 2020. ISBN 978-1-4200-7616-5. + +Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. In 5th International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Sy2fzU9gl. +Jennifer L Hill. Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1):217-240, 2011. +Han Hong, Michael P Leung, and Jessie Li. Inference on finite-population treatment effects under limited overlap. The Econometrics Journal, 23(1):32-47, 2020. +Ming-Yueh Huang and Kwun Chuen Gary Chan. Joint sufficient dimension reduction and estimation of conditional and average treatment effects. Biometrika, 104(3):583-596, 2017. +Martin Huber and Kaspar Wuthrich. Local average and quantile treatment effects under endogeneity: a review. Journal of Econometric Methods, 8(1), 2018. +Guido W Imbens and Donald B Rubin. Causal inference in statistics, social, and biomedical sciences. Cambridge University Press, 2015. +Dominik Janzing and Bernhard Scholkopf. Causal inference using the algorithmic markov condition. IEEE Transactions on Information Theory, 56(10):5168-5194, 2010. +Andrew Jesson, Soren Mindermann, Uri Shalit, and Yarin Gal. Identifying causal-effect inference failure with uncertainty-aware models. Advances in Neural Information Processing Systems, 33, 2020. +Fredrik Johansson, Uri Shalit, and David Sontag. Learning representations for counterfactual inference. In International conference on machine learning, pp. 3020-3029, 2016. +Fredrik D Johansson, David Sontag, and Rajesh Ranganath. Support and invertibility in domain-invariant representations. 
In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 527-536. PMLR, 2019. +Fredrik D Johansson, Uri Shalit, Nathan Kallus, and David Sontag. Generalization bounds and representation learning for estimation of potential outcomes and causal effects. arXiv preprint arXiv:2001.07426, 2020. +Nathan Kallus, Brenton Pennicooke, and Michele Santacatterina. More robust estimation of sample average treatment effects using kernel optimal matching in an observational study of spine surgical interventions. arXiv preprint arXiv:1811.04274, 2018. +Yvonne Kelly, Lidia Panico, Mel Bartley, Michael Marmot, James Nazroo, and Amanda Sacker. Why does birthweight vary among ethnic groups in the uk? findings from the millennium cohort study. Journal of public health, 31(1):131-137, 2009. +Ilyes Khemakhem, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. Variational autoencoders and nonlinear ica: A unifying framework. In International Conference on Artificial Intelligence and Statistics, pp. 2207-2217, 2020a. +Ilyes Khemakhem, Ricardo Monti, Diederik Kingma, and Aapo Hyvarinen. Ice-beem: Identifiable conditional energy-based deep models based on nonlinear ica. Advances in Neural Information Processing Systems, 33, 2020b. +Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. URL http://arxiv.org/abs/1312.6114. +Diederik P Kingma, Max Welling, et al. An introduction to variational autoencoders. Foundations and Trends® in Machine Learning, 12(4):307-392, 2019. +Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in neural information processing systems, pp. 3581-3589, 2014. + +Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=SJU4ayYgl. +Manabu Kuroki and Judea Pearl. 
Measurement bias and effect restoration in causal inference. Biometrika, 101(2):423-437, 2014. +Jure Leskovec and Andrej Krevl. Snap datasets: Stanford large network dataset collection, 2014. +Arthur Lewbel. The identification zoo: Meanings of identification in econometrics. Journal of Economic Literature, 57(4):835-903, 2019. +Fan Li and Fan Li. Propensity score weighting for causal inference with multiple treatments. The Annals of Applied Statistics, 13(4):2389-2415, 2019. +Zheng Li, Guannan Liu, and Qi Li. Nonparametric knn estimation with monotone constraints. Econometric Reviews, 36(6-9):988-1006, 2017. +Christos Louizos, Uri Shalit, Joris M Mooij, David Sontag, Richard Zemel, and Max Welling. Causal effect inference with deep latent-variable models. In Advances in Neural Information Processing Systems, pp. 6446-6456, 2017. +Danni Lu, Chenyang Tao, Junya Chen, Fan Li, Feng Guo, and Lawrence Carin. Reconsidering generative objectives for counterfactual reasoning. Advances in Neural Information Processing Systems, 33, 2020. +Wei Luo, Yeying Zhu, and Debashis Ghosh. On estimating regression-based causal effects using sufficient dimension reduction. Biometrika, 104(1):51-65, 2017. +Emile Mathieu, Tom Rainforth, Nana Siddharth, and Yee Whye Teh. Disentangling disentanglement in variational autoencoders. In International Conference on Machine Learning, pp. 4402-4412. PMLR, 2019. +Wang Miao, Zhi Geng, and Eric J Tchetgen Tchetgen. Identifying causal effects with proxy variables of an unmeasured confounder. Biometrika, 105(4):987-993, 2018. +Michael Oberst, Fredrik Johansson, Dennis Wei, Tian Gao, Gabriel Brat, David Sontag, and Kush Varshney. Characterization of overlap in observational studies. In International Conference on Artificial Intelligence and Statistics, pp. 788-798. PMLR, 2020. +Judea Pearl. Causality: models, reasoning and inference. Cambridge University Press, 2009. +Severi Rissanen and Pekka Marttinen. 
A critical look at the identifiability of causal effects with deep latent variable models. NeurIPS 2021, to appear, 2021. +Paul R Rosenbaum. Modern algorithms for matching in observational studies. Annual Review of Statistics and Its Application, 7:143-176, 2020. +Paul R Rosenbaum and Donald B Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41-55, 1983. +Donald B Rubin. Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100(469):322-331, 2005. +Uri Shalit, Fredrik D Johansson, and David Sontag. Estimating individual treatment effect: generalization bounds and algorithms. In International Conference on Machine Learning, pp. 3076-3085. PMLR, 2017. +Claudia Shi, David Blei, and Victor Veitch. Adapting neural networks for the estimation of treatment effects. In Advances in Neural Information Processing Systems, pp. 2507-2517, 2019. +Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In Advances in neural information processing systems, pp. 3483-3491, 2015. + +Peter Sorrenson, Carsten Rother, and Ullrich Köthe. Disentanglement by nonlinear ica with general incompressible-flow networks (gin). In International Conference on Learning Representations, 2019. +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958, 2014. +Jennifer E Starling, Catherine E Aiken, Jared S Murray, Annette Nakimuli, and James G Scott. Monotone function estimation in the presence of extreme data coarsening: Analysis of preeclampsia and birth weight in urban uganda. arXiv preprint arXiv:1912.06946, 2019. +Elizabeth A. Stuart. Matching Methods for Causal Inference: A Review and a Look Forward. Statistical Science, 25(1):1-21, 2010. 
doi: 10.1214/09-STS313. URL https://doi.org/10.1214/09-STS313. +Xinwei Sun, Botong Wu, Chang Liu, Xiangyu Zheng, Wei Chen, Tao Qin, and Tie-yan Liu. Latent causal invariant model. arXiv preprint arXiv:2011.02203, 2020. +Alexander Tarr and Kosuke Imai. Estimating average treatment effects with support vector machines. arXiv preprint arXiv:2102.11926, 2021. +Mark J Van der Laan and Sherri Rose. Targeted learning in data science: causal inference for complex longitudinal studies. Springer, 2018. +Victor Veitch, Yixin Wang, and David Blei. Using embeddings to correct for unobserved confounding in networks. In Advances in Neural Information Processing Systems, pp. 13792-13802, 2019. +Stefan Wager and Susan Athey. Estimation and inference of heterogeneous treatment effects using random forests. Journal of the American Statistical Association, 113(523):1228-1242, 2018. +Shanshan Wang, Liren Yang, Li Shang, Wenfang Yang, Cuifang Qi, Liyan Huang, Guilan Xie, Ruiqi Wang, and Mei Chun Chung. Changing trends of birth weight with maternal age: a cross-sectional study in xi'an city of northwestern china. BMC Pregnancy and Childbirth, 20(1):1-8, 2020. +Halbert White and Karim Chalak. Identification and identification failure for treatment effects using structural systems. *Econometric Reviews*, 32(3):273-317, 2013. +Pengzhou Wu and Kenji Fukumizu. Causal mosaic: Cause-effect inference via nonlinear ica and ensemble method. In International Conference on Artificial Intelligence and Statistics, pp. 1157-1167. PMLR, 2020a. URL http://proceedings.mlr.press/v108/wu20b.html. +Pengzhou Wu and Kenji Fukumizu. Towards principled causal effect estimation by deep identifiable models. arXiv preprint arXiv:2109.15062, 2021. +Pengzhou Abel Wu and Kenji Fukumizu. Identifying treatment effects under unobserved confounding by causal representation learning. submitted to ICLR 2021, 2020b. URL https://openreview.net/forum?id=D3TNqCspFpM. +S Yang and P Ding. 
Asymptotic inference of causal effects with observational studies trimmed by the estimated propensity scores. Biometrika, 105(2):487-493, 03 2018. ISSN 0006-3444. doi: 10.1093/biomet/asy008. URL https://doi.org/10.1093/biomet/asy008.
Liuyi Yao, Sheng Li, Yaliang Li, Mengdi Huai, Jing Gao, and Aidong Zhang. Representation learning for treatment effect estimation from observational data. In Advances in Neural Information Processing Systems, pp. 2633-2643, 2018.
Jinsung Yoon, James Jordon, and Mihaela van der Schaar. GANITE: Estimation of individualized treatment effects using generative adversarial nets. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=ByKWUeWA-.
Weijia Zhang, Lin Liu, and Jiuyong Li. Treatment effect estimation with disentangled latent factors. arXiv preprint arXiv:2001.10652, 2020a.
Yao Zhang, Alexis Bellot, and Mihaela Schaar. Learning overlapping representations for the estimation of individualized treatment effects. In International Conference on Artificial Intelligence and Statistics, pp. 1005-1014. PMLR, 2020b.

# A PROOFS

We restate our model identifiability formally.

Lemma 1 (Model identifiability). Given model (3) under (M1), for $T = t$, assume

(D1') (Non-degenerate data for $\lambda$) there exist $2n + 1$ points $x_0, \ldots, x_{2n} \in \mathcal{X}$ such that the $2n$-square matrix $L_t := [\gamma_{t,1}, \dots, \gamma_{t,2n}]$ is invertible, where $\gamma_{t,k} := \pmb{\lambda}_t(\pmb{x}_k) - \pmb{\lambda}_t(\pmb{x}_0)$.

Then, given $T = t$, the family is identifiable up to an equivalence class.
That is, if $p_{\pmb{\theta}}(\pmb{y}|\pmb{x},t) = p_{\pmb{\theta}'}(\pmb{y}|\pmb{x},t)$, the parameters are related as follows: for any $\pmb{y}_t$ in the image of $\pmb{f}_t$,

$$
\boldsymbol{f}_t^{-1}\left(\boldsymbol{y}_t\right) = \operatorname{diag}(\boldsymbol{a})\boldsymbol{f}_t'^{-1}\left(\boldsymbol{y}_t\right) + \boldsymbol{b} =: \mathcal{A}_t\left(\boldsymbol{f}_t'^{-1}\left(\boldsymbol{y}_t\right)\right) \tag{13}
$$

where $\operatorname{diag}(\boldsymbol{a})$ is an invertible diagonal $n$-matrix and $\boldsymbol{b}$ is an $n$-vector, both depending on $\lambda_t$ and $\lambda_t'$.

Note that (D1) in the main text implies (D1'); see Sec. B.2.3 in Khemakhem et al. (2020a). The main part of our model identifiability is essentially the same as that of Theorem 1 in Khemakhem et al. (2020a), but adapted to include the dependency on $t$. Here we give an outline of the proof; the details can easily be filled in by referring to Khemakhem et al. (2020a). In the proof, the subscripts $t$ are omitted for convenience.

Proof of Lemma 1. Using (M1) i) and ii), we transform $p_{f,\lambda}(\boldsymbol{y}|\boldsymbol{x},t) = p_{\boldsymbol{f}',\lambda'}(\boldsymbol{y}|\boldsymbol{x},t)$ into an equality of noiseless distributions, that is,

$$
q_{\boldsymbol{f}',\boldsymbol{\lambda}'}(\boldsymbol{y}) = q_{\boldsymbol{f},\boldsymbol{\lambda}}(\boldsymbol{y}) := p_{\boldsymbol{\lambda}}\left(\boldsymbol{f}^{-1}(\boldsymbol{y})|\boldsymbol{x},t\right)\operatorname{vol}\left(\boldsymbol{J}_{\boldsymbol{f}^{-1}}(\boldsymbol{y})\right)\mathbb{I}_{\mathcal{Y}}(\boldsymbol{y}) \tag{14}
$$

where $p_{\lambda}$ is the Gaussian density function of the conditional prior defined in (3) and $\operatorname{vol}(A) \coloneqq \sqrt{\det AA^T}$. $q_{\boldsymbol{f}',\boldsymbol{\lambda}'}$ is defined similarly to $q_{\boldsymbol{f},\boldsymbol{\lambda}}$.
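Before continuing, the affine equivalence in (13) can be illustrated numerically: for the factorized-Gaussian sufficient statistics $\boldsymbol{t}(z) = (z, z^2)$, a componentwise affine map on $z$ induces an affine, block-triangular map on $\boldsymbol{t}(z)$. This is a standalone sanity check, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
a, b = rng.normal(size=n), rng.normal(size=n)
v = rng.normal(size=n)
u = a * v + b                                  # a componentwise affine map, as in (13)

t = lambda z: np.concatenate([z, z ** 2])      # Gaussian sufficient statistics

# Block-triangular matrix mapping t(v) to t(u):
# u = a*v + b  and  u^2 = a^2 * v^2 + 2ab * v + b^2.
A = np.block([[np.diag(a),         np.zeros((n, n))],
              [np.diag(2 * a * b), np.diag(a ** 2)]])
c = np.concatenate([b, b ** 2])
assert np.allclose(t(u), A @ t(v) + c)
```

The proof below runs this correspondence in the other direction: an affine relation between sufficient statistics with this sparse structure forces a componentwise affine map between the latents.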
Then, applying model (3) to (14), plugging the $2n + 1$ points from (D1') into it, and re-arranging the resulting $2n + 1$ equations in matrix form, we have

$$
\mathcal{F}'(Y) = \mathcal{F}(Y) := \boldsymbol{L}^T\boldsymbol{t}\left(\boldsymbol{f}^{-1}(Y)\right) - \boldsymbol{\beta} \tag{15}
$$

where $\boldsymbol{t}(Z) \coloneqq (Z, Z^2)^T$ is the sufficient statistics of the factorized Gaussian, and $\boldsymbol{\beta}_t \coloneqq (\alpha_t(\boldsymbol{x}_1) - \alpha_t(\boldsymbol{x}_0), \dots, \alpha_t(\boldsymbol{x}_{2n}) - \alpha_t(\boldsymbol{x}_0))^T$, where $\alpha_t(X; \boldsymbol{\lambda}_t)$ is the log-partition function of the conditional prior in (3). $\mathcal{F}'$ is defined similarly to $\mathcal{F}$, but with $\boldsymbol{f}', \boldsymbol{\lambda}', \alpha'$.

Since $\boldsymbol{L}$ is invertible, we have

$$
\boldsymbol{t}\left(\boldsymbol{f}^{-1}(Y)\right) = \boldsymbol{A}\boldsymbol{t}\left(\boldsymbol{f}'^{-1}(Y)\right) + \boldsymbol{c} \tag{16}
$$

where $\boldsymbol{A} = \boldsymbol{L}^{-T}\boldsymbol{L}'^T$ and $\boldsymbol{c} = \boldsymbol{L}^{-T}(\boldsymbol{\beta} - \boldsymbol{\beta}')$.

The final part of the proof is to show, by following the same reasoning as in Appendix B of Sorrenson et al. (2019), that $\boldsymbol{A}$ is a sparse matrix such that

$$
\boldsymbol{A} = \begin{pmatrix} \operatorname{diag}(\boldsymbol{a}) & \boldsymbol{O} \\ \operatorname{diag}(\boldsymbol{u}) & \operatorname{diag}\left(\boldsymbol{a}^2\right) \end{pmatrix} \tag{17}
$$

where $\boldsymbol{A}$ is partitioned into four $n$-square matrices. Thus

$$
\boldsymbol{f}^{-1}(Y) = \operatorname{diag}(\boldsymbol{a})\boldsymbol{f}'^{-1}(Y) + \boldsymbol{b} \tag{18}
$$

where $\boldsymbol{b}$ is the first half of $\boldsymbol{c}$.

Proof of Proposition 2.
Under (G2) and (M3), we have

$$
\mathbb{E}_{p_{\theta}}(Y|X,T) = \mathbb{E}(Y|X,T) \Rightarrow \boldsymbol{f}_t \circ \boldsymbol{h}(\boldsymbol{x}) = \boldsymbol{j}_t \circ \mathfrak{p}(\boldsymbol{x}) \text{ on } (\boldsymbol{x}, t) \text{ such that } p(t, \boldsymbol{x}) > 0. \tag{19}
$$

We show that the solution set of (19) on overlapping $\boldsymbol{x}$ is

$$
\{(f, h) \mid f_t = j_t \circ \Delta^{-1},\ h = \Delta \circ \mathfrak{p},\ \Delta: \mathcal{P} \rightarrow \mathbb{R}^n \text{ is injective}\}. \tag{20}
$$

By (G2) and (M1), and with injective $f_t, j_t$ and $\dim(Z) = \dim(Y) \geq \dim(\mathfrak{p})$, for any $\Delta$ above, there exists a functional parameter $f_t$ such that $j_t = f_t \circ \Delta$. Thus, the set (20) is non-empty, and any element is indeed a solution because $f_t \circ h = j_t \circ \Delta^{-1} \circ \Delta \circ \mathfrak{p} = j_t \circ \mathfrak{p}$.

Conversely, any solution of (19) is in (20). A solution should satisfy $h(\boldsymbol{x}) = \boldsymbol{f}_t^{-1} \circ \boldsymbol{j}_t \circ \mathfrak{p}(\boldsymbol{x})$ for both $t$ since $\boldsymbol{x}$ is overlapping. This means the injective function $\boldsymbol{f}_t^{-1} \circ \boldsymbol{j}_t$ does not depend on $t$; thus it is one of the $\Delta$ in (20).

We have proved conclusion 1) with $\boldsymbol{v} \coloneqq \Delta$. And, on overlapping $\boldsymbol{x}$, conclusion 2) is quickly seen from

$$
\hat{\mu}_t(\boldsymbol{x}) = \boldsymbol{f}_t(\boldsymbol{h}(\boldsymbol{x})) = \boldsymbol{j}_t \circ \boldsymbol{v}^{-1}(\boldsymbol{v} \circ \mathfrak{p}(\boldsymbol{x})) = \boldsymbol{j}_t(\mathfrak{p}(\boldsymbol{x})) = \mu_t(\boldsymbol{x}). \tag{21}
$$

We rely on an overlapping $\mathfrak{p}$ to work for non-overlapping $\boldsymbol{x}$.
For any $\boldsymbol{x}_t$ with $p(1 - t|\boldsymbol{x}_t) = 0$, to ensure $p(1 - t|\mathfrak{p}(\boldsymbol{x}_t)) > 0$, there should exist $\boldsymbol{x}_{1-t}$ such that $\mathfrak{p}(\boldsymbol{x}_{1-t}) = \mathfrak{p}(\boldsymbol{x}_t)$ and $p(1 - t|\boldsymbol{x}_{1-t}) > 0$. We also have $\boldsymbol{h}(\boldsymbol{x}_{1-t}) = \boldsymbol{h}(\boldsymbol{x}_t)$ due to (M2). Then, we have

$$
\hat{\mu}_{1-t}\left(\boldsymbol{x}_t\right) = \boldsymbol{f}_{1-t}\left(\boldsymbol{h}\left(\boldsymbol{x}_t\right)\right) = \boldsymbol{f}_{1-t}\left(\boldsymbol{h}\left(\boldsymbol{x}_{1-t}\right)\right) = \boldsymbol{j}_{1-t}\left(\mathfrak{p}\left(\boldsymbol{x}_{1-t}\right)\right) = \boldsymbol{j}_{1-t}\left(\mathfrak{p}\left(\boldsymbol{x}_t\right)\right) = \mu_{1-t}\left(\boldsymbol{x}_t\right). \tag{22}
$$

The third equality uses (19) on $(\boldsymbol{x}_{1-t}, 1-t)$.

Below we prove Theorem 1 with (D2) replaced by

(D2') (Spontaneous balance) there exist $2n + 1$ points $\pmb{x}_0, \dots, \pmb{x}_{2n} \in \mathcal{X}$, a $2n$-square matrix $C$, and a $2n$-vector $d$, such that $L_0^{-1}L_1 = C$ and $\beta_0 - C^{-T}\beta_1 = d/k$ for the optimal $\lambda_t$ (see below), where $L_t$ is defined in (D1'), $\beta_t := (\alpha_t(\pmb{x}_1) - \alpha_t(\pmb{x}_0), \dots, \alpha_t(\pmb{x}_{2n}) - \alpha_t(\pmb{x}_0))^T$, and $\alpha_t(X; \lambda_t)$ is the log-partition function of the prior in (3).

(D2') restricts the discrepancy between $\lambda_0, \lambda_1$ on $2n + 1$ values of $X$, and thus is relatively easy to satisfy with high-dimensional $X$. (D2') is general despite (or thanks to) its involved formulation. Let us see its generality even in a highly special case: $C = cI$ and $d = 0$. Then, $L_0^{-1}L_1 = cI$ requires that $h_1(x_k) - ch_0(x_k)$ is the same for the $2n + 1$ points $x_k$.
This is easily satisfied except when $n \gg m$, where $m$ is the dimension of $X$, which rarely happens in practice. And $\beta_0 - C^{-T}\beta_1 = d$ becomes just $\beta_1 = c\beta_0$, which is equivalent to $\alpha_1(x_k) - c\alpha_0(x_k)$ being the same for the $2n + 1$ points, again fine in practice. However, the high generality comes at a price: verifying (D2') using data is challenging, particularly with high-dimensional covariates and latent variables. Although we believe fast algorithms for this purpose could be developed, the effort would be nontrivial. This is another motivation to use the extreme case $\lambda_0 = \lambda_1$ in Sec. 4.1, which corresponds to $C = I$ and $d = 0$.

Proof of Theorem 1. By (M1) and (G1'), for any injective function $\Delta: \mathcal{P} \to \mathbb{R}^n$, there exists a functional parameter $f_t^*$ such that $j_t = f_t^* \circ \Delta$. Let $h_t^* = \Delta \circ \mathfrak{p}_t$; then, clearly from (M3'), such parameters $\theta^* = (f^*, h^*)$ are optimal: $p_{\theta^*}(y|x,t) = p(y|x,t)$.

Since we have all the assumptions of Lemma 1, we have

$$
\Delta \circ \boldsymbol{j}^{-1}(\boldsymbol{y}) = \boldsymbol{f}^{*-1}(\boldsymbol{y}) = \mathcal{A} \circ \boldsymbol{f}^{-1}(\boldsymbol{y})|_t, \text{ on } (\boldsymbol{y}, t) \in \left\{\left(\boldsymbol{j}_t \circ \mathfrak{p}_t(\boldsymbol{x}), t\right) \mid p(t, \boldsymbol{x}) > 0\right\}, \tag{23}
$$

where $\boldsymbol{f}$ is any optimal parameter, and $|_t$ collects all subscripts $t$. Note that, except for $\Delta$, all the symbols should carry the subscript $t$.

Nevertheless, using (D2'), we can further prove $\mathcal{A}_0 = \mathcal{A}_1$.

We repeat the core quantities from Lemma 1 here: $\boldsymbol{A}_t = \boldsymbol{L}_t^{-T}\boldsymbol{L}_t'^T$ and $\boldsymbol{c}_t = \boldsymbol{L}_t^{-T}(\boldsymbol{\beta}_t - \boldsymbol{\beta}_t')$.
From (D2'), we immediately have

$$
\boldsymbol{L}_0^{-1}\boldsymbol{L}_1 = \boldsymbol{L}_0'^{-1}\boldsymbol{L}_1' = \boldsymbol{C} \Longleftrightarrow \boldsymbol{A}_0 = \boldsymbol{A}_1 \tag{24}
$$

and also

$$
\boldsymbol{L}_0^{-1}\boldsymbol{L}_1 = \boldsymbol{C} \iff \boldsymbol{L}_0^{-T}\boldsymbol{C}^{-T} = \boldsymbol{L}_1^{-T}
$$

$$
\boldsymbol{\beta}_0 - \boldsymbol{C}^{-T}\boldsymbol{\beta}_1 = \boldsymbol{\beta}_0' - \boldsymbol{C}^{-T}\boldsymbol{\beta}_1' = \boldsymbol{d}/k \iff \boldsymbol{C}^T\left(\boldsymbol{\beta}_0 - \boldsymbol{\beta}_0'\right) = \boldsymbol{\beta}_1 - \boldsymbol{\beta}_1' \tag{25}
$$

Multiplying the right-hand sides of the two lines, we have $\boldsymbol{c}_0 = \boldsymbol{c}_1$. Now we have $\mathcal{A}_0 = \mathcal{A}_1 \coloneqq \mathcal{A}$. Applying this to (23), we have

$$
\boldsymbol{f}_t = \boldsymbol{j}_t \circ \boldsymbol{v}^{-1}, \quad \boldsymbol{v} \coloneqq \mathcal{A}^{-1} \circ \Delta \tag{26}
$$

for any optimal parameters $\theta = (f, h)$. Again, from (M3'), we have

$$
p_{\boldsymbol{\theta}}(\boldsymbol{y}|\boldsymbol{x},t) = p(\boldsymbol{y}|\boldsymbol{x},t) \Longrightarrow p_{\epsilon}(\boldsymbol{y} - \boldsymbol{f}_t(\boldsymbol{h}_t(\boldsymbol{x}))) = p_{\mathbf{e}}(\boldsymbol{y} - \boldsymbol{j}_t(\mathfrak{p}_t(\boldsymbol{x}))) \tag{27}
$$

where $p_{\epsilon} = p_{\mathbf{e}}$. The above is only possible when $\boldsymbol{f}_t \circ \boldsymbol{h}_t = \boldsymbol{j}_t \circ \mathfrak{p}_t$. Combined with $\boldsymbol{f}_t = \boldsymbol{j}_t \circ \boldsymbol{v}^{-1}$, we have conclusion 1).

Conclusion 2) follows from the same reasoning as Proposition 2, applied to both $\mathfrak{p}_0$ and $\mathfrak{p}_1$.
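The algebra behind (24)-(25) admits a quick numerical sanity check: enforcing the two (D2') constraints on random matrices yields $\boldsymbol{A}_0 = \boldsymbol{A}_1$ and $\boldsymbol{c}_0 = \boldsymbol{c}_1$. This is only an illustration with generic random draws (so all matrices are invertible almost surely), not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4                                 # stands in for 2n
L0, L0p = rng.normal(size=(m, m)), rng.normal(size=(m, m))
C = rng.normal(size=(m, m))
L1, L1p = L0 @ C, L0p @ C             # enforce L0^{-1} L1 = L0'^{-1} L1' = C

b0, b0p = rng.normal(size=m), rng.normal(size=m)
b1p = rng.normal(size=m)
b1 = C.T @ (b0 - b0p) + b1p           # enforce C^T (b0 - b0') = b1 - b1', as in (25)

inv_T = lambda M: np.linalg.inv(M).T
A0, A1 = inv_T(L0) @ L0p.T, inv_T(L1) @ L1p.T
c0, c1 = inv_T(L0) @ (b0 - b0p), inv_T(L1) @ (b1 - b1p)
assert np.allclose(A0, A1) and np.allclose(c0, c1)
```

Symbolically: $\boldsymbol{A}_1 = (\boldsymbol{L}_0\boldsymbol{C})^{-T}(\boldsymbol{L}_0'\boldsymbol{C})^T = \boldsymbol{L}_0^{-T}\boldsymbol{C}^{-T}\boldsymbol{C}^T\boldsymbol{L}_0'^T = \boldsymbol{A}_0$, and the $\boldsymbol{C}^{-T}\boldsymbol{C}^T$ cancellation gives $\boldsymbol{c}_1 = \boldsymbol{c}_0$ in the same way.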
Note that, when multiplying the two lines of (25), the effects of $k \to 0$ cancel out, and $c_t$ is finite and well-defined. It is also apparent from the above proof that $(\mathbf{D2^{\prime}})$ is a necessary and sufficient condition for $\mathcal{A}_0 = \mathcal{A}_1$, given the other conditions of Theorem 1.

Below, we prove the results in Sec. 4.2. The definitions and results also work for the prior; simply replace $q_t(\pmb{z}|\pmb{x})$ with $p_t(\pmb{z}|\pmb{x}) \coloneqq p_{\lambda}(\pmb{z}|\pmb{x}, t)$ in the definitions and statements, and the proofs below hold unchanged. The dependence on $\pmb{f}$ prevails, and the superscripts are omitted. The argument $\pmb{x}$ is sometimes also omitted.

Lemma 2 (Counterfactual risk bound). Assume $|\mathcal{L}_{\pmb{f}}(\pmb{z}, t)| \leq M$; we have

$$
\epsilon_{CF}(\boldsymbol{x}) \leq \sum_t p(1 - t|\boldsymbol{x}) \epsilon_{F,t}(\boldsymbol{x}) + M D(\boldsymbol{x}) \tag{28}
$$

where $\epsilon_{CF}(\pmb{x}) \coloneqq \sum_t p(1 - t|\pmb{x}) \epsilon_{CF,t}(\pmb{x})$ and $D(\pmb{x}) \coloneqq \sum_t \sqrt{D_{\mathrm{KL}}(q_t \| q_{1-t})/2}$.

Proof of Lemma 2.

$$
\begin{array}{l}
\epsilon_{CF} - \sum_t p(1 - t|\boldsymbol{x}) \epsilon_{F,t} \\
= p(0|\boldsymbol{x}) \left(\epsilon_{CF,1} - \epsilon_{F,1}\right) + p(1|\boldsymbol{x}) \left(\epsilon_{CF,0} - \epsilon_{F,0}\right) \\
= p(0|\boldsymbol{x}) \int \mathcal{L}_{\boldsymbol{f}}(\boldsymbol{z}, 1) \left(q_0(\boldsymbol{z}|\boldsymbol{x}) - q_1(\boldsymbol{z}|\boldsymbol{x})\right) d\boldsymbol{z} + p(1|\boldsymbol{x}) \int \mathcal{L}_{\boldsymbol{f}}(\boldsymbol{z}, 0) \left(q_1(\boldsymbol{z}|\boldsymbol{x}) - q_0(\boldsymbol{z}|\boldsymbol{x})\right) d\boldsymbol{z} \\
\leq 2M \mathbb{TV}\left(q_1, q_0\right) \leq M D. \\
\end{array}
$$

$\mathbb{TV}(p, q) \coloneqq \frac{1}{2} \int |p(\boldsymbol{z}) - q(\boldsymbol{z})| d\boldsymbol{z}$ is the total variation distance between probability densities $p, q$. The last inequality uses Pinsker's inequality $\mathbb{TV}(p, q) \leq \sqrt{D_{\mathrm{KL}}(p \| q)/2}$ twice, to get the symmetric $D$.

Theorem 2 is a direct corollary of Lemma 2 and the following.

Lemma 3. Define $\epsilon_F = \sum_t p(t|\pmb{x}) \epsilon_{F,t}$. We have

$$
\epsilon_{\boldsymbol{f}} \leq 2 \left(G \left(\epsilon_F + \epsilon_{CF}\right) - \mathbf{V}_Y\right). \tag{29}
$$

Bounding $\epsilon_{CF}$ in (29) by Lemma 2 gives Theorem 2. To prove Lemma 3, we first examine a bias-variance decomposition of $\epsilon_F$ and $\epsilon_{CF}$:

$$
\begin{array}{l}
\epsilon_{CF,t} = \mathbb{E}_{q_{1-t}(\boldsymbol{z}|\boldsymbol{x})} \boldsymbol{g}_t(\boldsymbol{z}) \mathbb{E}_{p_{Y(t)|\mathfrak{p}_t}(\boldsymbol{y}|\boldsymbol{z})} (\boldsymbol{y} - \boldsymbol{f}_t(\boldsymbol{z}))^2 \\
\geq G \mathbb{E}_{q_{1-t}(\boldsymbol{z}|\boldsymbol{x})} \mathbb{E}_{p_{Y(t)|\mathfrak{p}_t}(\boldsymbol{y}|\boldsymbol{z})} (\boldsymbol{y} - \boldsymbol{f}_t(\boldsymbol{z}))^2 \\
= G \mathbb{E}_{q_{1-t}(\boldsymbol{z}|\boldsymbol{x})} \mathbb{E}_{p_{Y(t)|\mathfrak{p}_t}(\boldsymbol{y}|\boldsymbol{z})} ((\boldsymbol{y} - \boldsymbol{j}_t(\boldsymbol{z}))^2 + (\boldsymbol{j}_t(\boldsymbol{z}) - \boldsymbol{f}_t(\boldsymbol{z}))^2) \\
\end{array} \tag{30}
$$

The second line uses $|\pmb{g}_t(\pmb{z})| \leq G$, and the third line is a bias-variance decomposition.
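The last step of the proof chains the total-variation bound with Pinsker's inequality. Both quantities are easy to compute for discrete distributions, so the inequality can be sanity-checked numerically; the two distributions below are arbitrary illustrations, not taken from the paper:

```python
import math

def tv(p, q):
    # Total variation distance between two discrete distributions.
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def kl(p, q):
    # KL divergence D_KL(p || q), assuming q has full support.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.2, 0.4, 0.4]

# Pinsker's inequality: TV(p, q) <= sqrt(D_KL(p || q) / 2).
assert tv(p, q) <= math.sqrt(kl(p, q) / 2)
```

Applying the inequality in both directions of the (asymmetric) KL, as the proof does, yields the symmetric quantity $D$.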
Now we can define $\mathbf{V}_{CF,t}(\pmb{x}) \coloneqq \mathbb{E}_{q_{1-t}(\pmb{z}|\pmb{x})} \mathbb{E}_{p_{Y(t)|\mathfrak{p}_t}(\pmb{y}|\pmb{z})} (\pmb{y} - \pmb{j}_t(\pmb{z}))^2$ and $\mathbb{B}_{CF,t}(\pmb{x}) \coloneqq \mathbb{E}_{q_{1-t}(\pmb{z}|\pmb{x})} (\pmb{j}_t(\pmb{z}) - \pmb{f}_t(\pmb{z}))^2$, and we have

$$
\epsilon_{CF,t} \geq G \left(\mathbf{V}_{CF,t}(\boldsymbol{x}) + \mathbb{B}_{CF,t}(\boldsymbol{x})\right) \Longrightarrow \epsilon_{CF} \geq G \left(\mathbf{V}_{CF}(\boldsymbol{x}) + \mathbb{B}_{CF}(\boldsymbol{x})\right) \tag{31}
$$

where $\mathbf{V}_{CF} := \sum_t p(1 - t|\pmb{x}) \mathbf{V}_{CF,t} = \sum_t \mathbb{E}_{q(\pmb{z}, 1-t|\pmb{x})} \mathbb{E}_{p_{Y(t)|\mathfrak{p}_t}(\pmb{y}|\pmb{z})} (\pmb{y} - \pmb{j}_t(\pmb{z}))^2$ and similarly $\mathbb{B}_{CF} = \sum_t \mathbb{E}_{q(\pmb{z}, 1-t|\pmb{x})} (\pmb{j}_t(\pmb{z}) - \pmb{f}_t(\pmb{z}))^2$. Repeating the above derivation for $\epsilon_F$, we have

$$
\epsilon_F \geq G \left(\mathbf{V}_F(\boldsymbol{x}) + \mathbb{B}_F(\boldsymbol{x})\right) \tag{32}
$$

where $\mathbf{V}_F = \sum_t \mathbb{E}_{q(\pmb{z}, t|\pmb{x})} \mathbb{E}_{p_{Y(t)|\mathfrak{p}_t}(\pmb{y}|\pmb{z})} (\pmb{y} - \pmb{j}_t(\pmb{z}))^2$ and $\mathbb{B}_F = \sum_t \mathbb{E}_{q(\pmb{z}, t|\pmb{x})} (\pmb{j}_t(\pmb{z}) - \pmb{f}_t(\pmb{z}))^2$. Now, we are ready to prove Lemma 3.

Proof of Lemma 3.
$$
\begin{array}{l}
\epsilon_{\boldsymbol{f}} = \mathbb{E}_{q(\boldsymbol{z}|\boldsymbol{x})} \left(\left(\boldsymbol{f}_1 - \boldsymbol{f}_0\right) - \left(\boldsymbol{j}_1 - \boldsymbol{j}_0\right)\right)^2 \\
= \mathbb{E}_q ((\boldsymbol{f}_1 - \boldsymbol{j}_1) + (\boldsymbol{j}_0 - \boldsymbol{f}_0))^2 \\
\leq 2 \mathbb{E}_q ((\boldsymbol{f}_1 - \boldsymbol{j}_1)^2 + (\boldsymbol{j}_0 - \boldsymbol{f}_0)^2) \\
= 2 \int [(\boldsymbol{f}_1 - \boldsymbol{j}_1)^2 q(\boldsymbol{z}, 1|\boldsymbol{x}) + (\boldsymbol{j}_0 - \boldsymbol{f}_0)^2 q(\boldsymbol{z}, 0|\boldsymbol{x}) + \\
\quad \left(\boldsymbol{f}_1 - \boldsymbol{j}_1\right)^2 q(\boldsymbol{z}, 0|\boldsymbol{x}) + \left(\boldsymbol{j}_0 - \boldsymbol{f}_0\right)^2 q(\boldsymbol{z}, 1|\boldsymbol{x})] d\boldsymbol{z} \\
= 2 \left(\mathbb{B}_F + \mathbb{B}_{CF}\right) \leq 2 \left(G \left(\epsilon_F + \epsilon_{CF}\right) - \mathbf{V}_Y\right) \\
\end{array}
$$

The first inequality uses $(a + b)^2 \leq 2(a^2 + b^2)$. The next equality splits $q(\boldsymbol{z}|\boldsymbol{x})$ into $q(\boldsymbol{z}, 0|\boldsymbol{x})$ and $q(\boldsymbol{z}, 1|\boldsymbol{x})$ and rearranges to get $\mathbb{B}_F$ and $\mathbb{B}_{CF}$. The last inequality uses the two bias-variance decompositions, and $\mathbf{V}_Y = \mathbf{V}_F + \mathbf{V}_{CF}$.

# B ADDITIONAL BACKGROUND

# B.1 PROGNOSTIC SCORE AND BALANCING SCORE

In the foundational work of Hansen (2008), the prognostic score is defined equivalently to our $\mathfrak{p}_0$ (P0-score), but it additionally requires no effect modification in order to work for $Y(1)$. Thus, a useful prognostic score corresponds to our PGS. We give the main properties of the PGS as follows.

Proposition 3.
If $V$ gives exchangeability, and $\mathfrak{p}_t(V)$ is a PGS, then $Y(t) \perp V, T | \mathfrak{p}_t$.

The following three properties of conditional independence will be used repeatedly in proofs.

Proposition 4 (Properties of conditional independence). (Pearl, 2009, Sec. 1.1.5) For random variables $W, X, Y, Z$, we have:

$$
\begin{array}{l}
X \perp Y | Z \wedge X \perp W | Y, Z \Rightarrow X \perp W, Y | Z \quad (\text{Contraction}). \\
X \perp W, Y | Z \Rightarrow X \perp Y | W, Z \quad (\text{Weak union}). \\
X \perp W, Y | Z \Rightarrow X \perp Y | Z \quad (\text{Decomposition}). \\
\end{array}
$$

Proof of Proposition 3. From $Y(t) \perp T | V$ (exchangeability of $V$), and since $\mathfrak{p}_t$ is a function of $V$, we have $Y(t) \perp T | \mathfrak{p}_t, V$ (1).

From (1) and $Y(t) \perp V | \mathfrak{p}_t(V)$ (definition of Pt-score), using the contraction rule, we have $Y(t) \perp T, V | \mathfrak{p}_t$ for both $t$.

Prognostic scores are closely related to the important concept of balancing score (Rosenbaum & Rubin, 1983). Note particularly that the proposition implies $Y(t) \perp T | \mathfrak{p}_t$ (using the decomposition rule). Thus, if $\mathfrak{p}(V)$ is a P-score, then $\mathfrak{p}$ also gives weak ignorability (exchangeability and overlap), a nice property shared with the balancing score, as we will see immediately.

Definition 4 (Balancing score). $b(V)$, a function of random variable $V$, is a balancing score if $T \perp V | b(V)$.

Proposition 5. Let $\mathbf{b}(V)$ be a function of random variable $V$. $\mathbf{b}(V)$ is a balancing score if and only if $f(\mathbf{b}(V)) = p(T = 1|V) \coloneqq e(V)$ for some function $f$ (or, more formally, $e(V)$ is $\mathbf{b}(V)$-measurable). Assume further that $V$ gives weak ignorability; then so does $\mathbf{b}(V)$.
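The defining property $T \perp V | b(V)$ can be illustrated with a small exact computation: within each stratum of the propensity score $e(V)$, the distribution of $V$ is identical in the treated and control groups. The discrete distribution below is a hypothetical example:

```python
from fractions import Fraction as F

# Hypothetical example: V uniform on {0,1,2,3}; the propensity e(V)
# takes the same value on {0,1} and on {2,3}.
p_v = {v: F(1, 4) for v in range(4)}
e = {0: F(1, 3), 1: F(1, 3), 2: F(2, 3), 3: F(2, 3)}

def p_v_given_t_stratum(v, t, stratum):
    # P(V=v | T=t, e(V)=stratum), from p(v,t) = p(v) e(v)^t (1-e(v))^(1-t).
    pt = lambda u: e[u] if t == 1 else 1 - e[u]
    if e[v] != stratum:
        return F(0)
    num = p_v[v] * pt(v)
    den = sum(p_v[u] * pt(u) for u in p_v if e[u] == stratum)
    return num / den

# Within a stratum of e(V), V has the same distribution in both
# treatment groups: T is independent of V given e(V).
for v in range(2):
    assert p_v_given_t_stratum(v, 1, F(1, 3)) == p_v_given_t_stratum(v, 0, F(1, 3))
```

Exact rational arithmetic (`fractions.Fraction`) is used so the conditional distributions match identically rather than up to floating-point error.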
Obviously, the propensity score $e(V) \coloneqq p(T = 1|V)$, the propensity of assigning the treatment given $V$, is a balancing score (with $f$ being the identity function). Also, given any invertible function $v$, the composition $v \circ b$ is also a balancing score, since $f \circ v^{-1}(v \circ b(V)) = f(b(V)) = e(V)$.

Comparing the definitions of balancing score and prognostic score, we can say the balancing score is sufficient for the treatment $T$ ($T \perp V | b(V)$), while the prognostic score (Pt-score) is sufficient for the potential outcomes $Y(t)$ ($Y(t) \perp V | \mathfrak{p}_t(V)$). They complement each other; conditioning on either deconfounds the potential outcomes from the treatment, with the former focusing on the treatment side and the latter on the outcome side.

# B.2 VAE, CONDITIONAL VAE, AND IVAE

VAEs (Kingma et al., 2019) are a class of latent variable models with latent variable $Z$, where the observable $Y$ is generated by the decoder $p_{\theta}(\pmb{y}|\pmb{z})$. In the standard formulation (Kingma & Welling, 2013), the variational lower bound $\mathcal{L}(\pmb{y}; \pmb{\theta}, \phi)$ of the log-likelihood is derived as:

$$
\begin{array}{l}
\log p(\boldsymbol{y}) \geq \log p(\boldsymbol{y}) - D_{\mathrm{KL}}(q(\boldsymbol{z}|\boldsymbol{y}) \| p(\boldsymbol{z}|\boldsymbol{y})) \\
= \mathbb{E}_{\boldsymbol{z} \sim q} \log p_{\boldsymbol{\theta}}(\boldsymbol{y}|\boldsymbol{z}) - D_{\mathrm{KL}}\left(q_{\phi}(\boldsymbol{z}|\boldsymbol{y}) \| p(\boldsymbol{z})\right), \\
\end{array} \tag{33}
$$

where $D_{\mathrm{KL}}$ denotes KL divergence and the encoder $q_{\phi}(\boldsymbol{z}|\boldsymbol{y})$ is introduced to approximate the true posterior $p(\boldsymbol{z}|\boldsymbol{y})$.
The decoder $p_{\theta}$ and encoder $q_{\phi}$ are usually parametrized by NNs. We will omit the parameters $\theta, \phi$ in notation when appropriate.

The parameters of the VAE can be learned with stochastic gradient variational Bayes. With Gaussian latent variables, the KL term of $\mathcal{L}$ has a closed form, while the first term can be evaluated by drawing samples from the approximate posterior $q_{\phi}$ using the reparameterization trick (Kingma & Welling, 2013). Then, by optimizing the evidence lower bound (ELBO) $\mathbb{E}_{\boldsymbol{y} \sim \mathcal{D}}(\mathcal{L}(\boldsymbol{y}))$ with data $\mathcal{D}$, we train the VAE efficiently.

Conditional VAE (CVAE) (Sohn et al., 2015; Kingma et al., 2014) adds a conditioning variable $C$, usually a class label, to the standard VAE (see Figure 1). With the conditioning variable, CVAE can give better reconstructions of each class. The variational lower bound is

$$
\log p(\boldsymbol{y}|\boldsymbol{c}) \geq \mathbb{E}_{\boldsymbol{z} \sim q} \log p(\boldsymbol{y}|\boldsymbol{z}, \boldsymbol{c}) - D_{\mathrm{KL}}(q(\boldsymbol{z}|\boldsymbol{y}, \boldsymbol{c}) \| p(\boldsymbol{z}|\boldsymbol{c})). \tag{34}
$$

The conditioning on $C$ in the prior is usually omitted (Doersch, 2016), i.e., the prior becomes $Z \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ as in the standard VAE, since the dependence between $C$ and the latent representation is also modeled in the encoder $q$. Moreover, an unconditional prior in fact gives better reconstruction because it encourages learning a representation independent of class, similar to the idea of beta-VAE (Higgins et al., 2017).

As mentioned, identifiable VAE (iVAE) (Khemakhem et al., 2020a) provides the first identifiability result for VAE, using an auxiliary variable $X$. It assumes $Y \perp X | Z$, that is, $p(\boldsymbol{y}|\boldsymbol{z}, \boldsymbol{x}) = p(\boldsymbol{y}|\boldsymbol{z})$.
The variational lower bound is

$$
\begin{array}{l}
\log p(\boldsymbol{y}|\boldsymbol{x}) \geq \log p(\boldsymbol{y}|\boldsymbol{x}) - D_{\mathrm{KL}}(q(\boldsymbol{z}|\boldsymbol{y}, \boldsymbol{x}) \| p(\boldsymbol{z}|\boldsymbol{y}, \boldsymbol{x})) \\
= \mathbb{E}_{\boldsymbol{z} \sim q} \log p_{\boldsymbol{f}}(\boldsymbol{y}|\boldsymbol{z}) - D_{\mathrm{KL}}(q(\boldsymbol{z}|\boldsymbol{y}, \boldsymbol{x}) \| p_{\boldsymbol{T}, \boldsymbol{\lambda}}(\boldsymbol{z}|\boldsymbol{x})), \\
\end{array} \tag{35}
$$

where $Y = f(Z) + \epsilon$, $\epsilon$ is additive noise, and $Z$ has an exponential family distribution with sufficient statistics $T$ and parameter $\lambda(X)$. Note that, unlike CVAE, the decoder does not depend on $X$, due to the independence assumption.

Here, identifiability of the model means that the functional parameters $(\pmb{f}, \pmb{T}, \pmb{\lambda})$ can be identified (learned) up to certain simple transformations. Further, in the limit of $\epsilon \rightarrow 0$, iVAE solves the nonlinear ICA problem of recovering $Z = f^{-1}(Y)$.

# C EXPOSITIONS

The order of the subsections below follows the order in which they are referred to in the main text.

# C.1 LIST OF ASSUMPTIONS

The following is a list of the assumptions required by our identification theory, with comments on their roles and subtleties.

(G1) The additive noise model is needed to ensure the existence of PtSs. (G1') is equivalent to (G1), and is introduced for better presentation, e.g., it connects to (G2) and (M1) through injectivity.
(M1) and (D1) are inherited from iVAE and are required for model (parameter) identifiability (identifying $f_t$ up to an affine mapping), which does not imply CATE identification in general. Arguably, the most important point here is that the mapping $f_t$ from latent $Z$ to outcome $Y$ is injective, or else some information in $Z$ is in principle unrecoverable.
These two conditions are not required by Proposition 2, which does not need model identifiability.
(M2), together with overlapping PtSs, is important for addressing the limited overlap of $X$, and can be seen as a weak form of OOD generalization.
(M3') means 1) we need to know or learn the distribution of the hidden noise e, and 2) the prior is noiseless. This simplifies the proof of identification, but when implementing the VAE as an estimation method, both noises are learned.
(D2), or in fact (D2'), strengthens the model identifiability to determine both $\pmb{f}_0$ and $\pmb{f}_1$ up to the same affine mapping, which replaces the balance of PS.
(G2) is required by Proposition 2 but not Theorem 1. It is no less important than (G1'), because the core intuition of our method is that (G2) should hold approximately. Sec. C.3 contains several detailed real-world examples of (G2).

# C.2 DETAILS AND EXPLANATIONS ON INTACT-VAE

Our goal is to build a model that can be learned by a VAE from observational data so as to obtain a PGS, or more ideally a bPGS, via the latent variable $Z$; that is, a generative prognostic model. Generative models are useful for solving the inverse problem of recovering PGSs.

With the above goal, the generative model of our VAE is built as (3). Conditioning on $X$ in the joint model $p(\pmb{y}, \pmb{z}|\pmb{x}, t)$ reflects that our estimand is CATE given $X$. Modeling the score by a conditional distribution rather than a deterministic function is more flexible.
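The variational bound such a conditional generative model admits can be made concrete in a one-dimensional linear-Gaussian instance, where the marginal likelihood $p(\pmb{y}|\pmb{x}, t)$ is available in closed form. All the numbers and the simple forms of the prior and decoder below are illustrative assumptions, not the paper's actual parametrization:

```python
import math

# Toy 1-D linear-Gaussian instance: prior p(z|x,t) = N(m, s2),
# decoder p(y|z,t) = N(a*z, sig2), encoder q(z|x,y,t) = N(mq, vq).

def log_normal(y, mean, var):
    return -0.5 * math.log(2 * math.pi * var) - (y - mean) ** 2 / (2 * var)

def elbo(y, mq, vq, m, s2, a, sig2):
    # E_{z~q}[log p(y|z,t)] - KL(q(z|x,y,t) || p(z|x,t)), both in closed form.
    recon = log_normal(y, a * mq, sig2) - a**2 * vq / (2 * sig2)
    kl = 0.5 * (math.log(s2 / vq) + (vq + (mq - m) ** 2) / s2 - 1)
    return recon - kl

m, s2, a, sig2, y = 0.0, 1.0, 2.0, 0.5, 1.3
log_marginal = log_normal(y, a * m, a**2 * s2 + sig2)  # exact log p(y|x,t)

# Any encoder gives a lower bound; the exact posterior makes it tight.
vq_star = 1 / (1 / s2 + a**2 / sig2)
mq_star = vq_star * (m / s2 + a * y / sig2)
assert abs(elbo(y, mq_star, vq_star, m, s2, a, sig2) - log_marginal) < 1e-9
assert elbo(y, 0.3, 0.7, m, s2, a, sig2) < log_marginal
```

The gap between the bound and `log_marginal` is exactly the KL from the encoder to the true posterior, which is the mechanism the derivation below exploits.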
The ELBO of our model can be derived from the standard variational lower bound as follows:

$$
\begin{array}{l}
\log p(\boldsymbol{y}|\boldsymbol{x}, t) \geq \log p(\boldsymbol{y}|\boldsymbol{x}, t) - D_{\mathrm{KL}}(q(\boldsymbol{z}|\boldsymbol{x}, \boldsymbol{y}, t) \| p(\boldsymbol{z}|\boldsymbol{x}, \boldsymbol{y}, t)) \\
= \mathbb{E}_{\boldsymbol{z} \sim q} \log p(\boldsymbol{y}|\boldsymbol{z}, t) - D_{\mathrm{KL}}(q(\boldsymbol{z}|\boldsymbol{x}, \boldsymbol{y}, t) \| p(\boldsymbol{z}|\boldsymbol{x}, t)). \\
\end{array} \tag{36}
$$

We naturally have an identifiable conditional VAE (CVAE), as the name suggests. Note that (3) has a factorization similar to the generative model of iVAE (Khemakhem et al., 2020a), that is, $p(\pmb{y}, \pmb{z}|\pmb{x}) = p(\pmb{y}|\pmb{z})p(\pmb{z}|\pmb{x})$; the first factor does not depend on $X$. Further, since we have the conditioning on $T$ in both factors of (3), our VAE architecture is a combination of iVAE and CVAE (Sohn et al., 2015; Kingma et al., 2014), with $T$ as the conditioning variable. See Figure 1 for the comparison in terms of graphical models. The core idea of iVAE is reflected in our model identifiability (see Lemma 1).

Do not confuse the DGP (G1) with the generative model (3) of Intact-VAE. The former is the causal model, but the latter is not (at least before we show the TE identifications in Sec. 3.2). In our case, the generative model is built as a way to learn the scores through the correspondence to (2).

In particular, note that a conditionally balanced representation $Z \perp T | X$ is possible under the generative model. This requires a violation of causal faithfulness, so that there are additional conditional independence relations that are not generally implied by the graphical model. Our method, based on iVAE, performs nonlinear ICA to recover the scores.
In fact, ICA procedures often violate causal faithfulness, because they require finding causes from effects. Also, the violation of causal faithfulness is not caused by the generative model (which is shown in Figure 1), because the representation is learned by the encoder, and $Z \perp T | X$ is enforced by $\beta$.

# C.3 DISCUSSIONS AND EXAMPLES OF (G2)

We focus on the univariate outcome on $\mathbb{R}$, which is the most practical case; the intuitions apply to more general types of outcomes. Then $i$, the mapping between $\mu_0$ and $\mu_1$, is monotone, i.e., either increasing or decreasing. An increasing $i$ means that, if a change in the value of $X$ increases (decreases) the outcome in the treatment group, then the same holds for the control group. This is often true because the treatment does not change the mechanism by which the covariates affect the outcome, under the principle of "independence of causal mechanisms" (ICM) (Janzing & Scholkopf, 2010). A decreasing $i$ corresponds to another common interpretation when ICM does not hold. Now, the treatment does change the way covariates affect $Y$, but in a global manner: it acts like a "switch" on the mechanism, so that the same change in $X$ always has opposite effects on the two treatment groups.

We support the above reasoning with real-world examples. First, we give two examples where $\mu_0$ and $\mu_1$ are both monotone increasing. This, and likewise both $\mu_t$ being monotone decreasing, are natural and sufficient conditions for an increasing $i$, though not necessary. The first example is from healthcare. (Starling et al., 2019) mentions that gestational age (length of pregnancy) has a monotone increasing effect on babies' birth weight, regardless of many other covariates. Thus, if we intervene on one of the other binary covariates (say, $t =$ receiving a healthcare program or not), both $\mu_t$ should be monotone increasing in gestational age. The next example is from economics.
(Gan & Li, 2016) shows that job-matching probability is monotone increasing in market size. Then, we can imagine that, with $t =$ receiving job-finding training or not, the monotonicity is not changed. Intuitively, the examples correspond to two common scenarios: the causal effects are accumulated through time (the first example), or the link between a covariate and the outcome is direct and/or strong (the second example).

Examples of a decreasing $i$ are rarer, and the following is a bit contrived. This example is also about babies' birth weight as the outcome. (Abrevaya et al., 2015) shows that, with $t =$ mother smokes or not and $X =$ mother's age, the CATE $\tau(\pmb{x})$ is monotone decreasing for $20 < x < 26$ (smoking decreases birth weight, and the absolute causal effect is larger for older mothers). On the other hand, it is shown that birth weight slightly increases (by about $100\mathrm{g}$) in the same age range in a surveyed population (Wang et al., 2020). Thus, it is convincing that smoking changes the tendency of birth weight w.r.t. mother's age from increasing to decreasing, and gives the large decrease in birth weight (by about $300\mathrm{g}$) as its causal effect. This can be understood as follows: the negative effects of smoking on the mother's health, and in turn on birth weight, are accumulated during the many years of smoking.
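The claim that two increasing $\mu_t$ yield an increasing $i = \mu_1 \circ \mu_0^{-1}$ can be checked on a toy pair of outcome functions; the functions below are arbitrary illustrations, not fitted to any of the examples above:

```python
import math

# Toy check: if mu_0 and mu_1 are both strictly increasing,
# then i = mu_1 ∘ mu_0^{-1} is strictly increasing.
mu0 = lambda x: x ** 3                    # strictly increasing, invertible
mu0_inv = lambda u: math.copysign(abs(u) ** (1 / 3), u)  # real cube root
mu1 = lambda x: 2 * x + math.tanh(x)      # also strictly increasing

i = lambda u: mu1(mu0_inv(u))

grid = [u / 10 for u in range(-30, 31)]
vals = [i(u) for u in grid]
assert all(a < b for a, b in zip(vals, vals[1:]))  # i is increasing
```

The same composition argument shows that two strictly decreasing $\mu_t$ also give an increasing $i$, matching the remark that these are sufficient but not necessary conditions.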
And $(\mathbf{D2^{\prime}})$ is easily satisfied with high-dimensional $X$, even if the possible values of $C, d$ are restricted to $C = cI$ and $d = 0$ (see below). On the other hand, $p_{\epsilon} = p_{\mathbf{e}}$ in $(\mathbf{M3^{\prime}})$ is impractical, but it ensures that $p_{\theta}(\boldsymbol{y}|\boldsymbol{x}, t) = p(\boldsymbol{y}|\boldsymbol{x}, t)$ so that (5) can be used. In Sec. 4.1, we consider a practical estimation method and introduce the regularization that encourages learning a PGS similar to a bPGS, so that $p_{\epsilon} = p_{\mathbf{e}}$ can be relaxed.

# C.5 IDEAS AND CONNECTIONS BEHIND THE ELBO (7)

The Bayesian approach is favorable for expressing the prior belief that bPGSs exist and the preference for them, and for still having reasonable posterior estimation when the belief fails and learning a general PGS is necessary. This is the causal importance of the VAE as an estimation method for us. By the unconditional but still flexible $\Lambda$, and also by the identifications, the ELBO encourages the recovery of an approximate bPGS as the posterior, which still learns the dependence on $T$ if necessary. Moreover, $\beta$ expresses our additional knowledge (or inductive bias) about whether or not approximate bPGSs exist (e.g., from domain expertise).

In fact, $\beta$ connects our VAE to $\beta$-VAE (Higgins et al., 2017), which is closely related to noise and variance control (Doersch, 2016, Sec. 2.4; Mathieu et al., 2019).

Considerations on noise modeling. In Theorem 1, with large and mismatched noises (in which case $(\mathbf{M3^{\prime}})$ is easily violated), the identification of the outcome model $\pmb{f}_t = \pmb{j}_t \circ \pmb{v}^{-1}$ would fail; in turn, the prior would learn confounding bias, confusing the causal effect of $T$ on $\mathfrak{p}_T$ with the correlation between $T$ and $X$. This is another reason to prefer $\lambda_0 = \lambda_1$, besides balancing.
On the other hand, the posterior conditioning on $Y$ provides information about the noise $\mathbf{e}$, and it is shown in (Bonhomme & Weidner, 2021) that posterior effect estimation has minimum worst-case error under model misspecification (of the noise and prior, in our case).

Under large $\mathbf{e}$, a relatively small $\beta$ implicitly encourages $\pmb{g}$ to be smaller than the scale of $\mathbf{e}$, by stressing the third term in the ELBO (7). And the model as a whole would still learn $p(\pmb{y}|\pmb{x}, t)$ well, because the uncertainty in $\mathbf{e}$ can be moved to, and modeled by, the prior. This is why $\pmb{k}$ is not set to zero: learnable prior noise (variance) allows us to implicitly control $\pmb{g}$ via $\beta$. Intuitively, smaller $\pmb{g}$ strengthens the correlation between $Y$ and $Z$ in our model, and this naturally reflects that posterior conditioning on $Y$ is more important under larger $\mathbf{e}$. Hopefully, precise learning of the outcome noise (M3') is not required, as in Proposition 2.

Now, it is clear that $\beta$ naturally controls the noise scale and balancing at the same time. The regularization can also be understood as an interpolation between Proposition 2 and Theorem 1: relying on the bPGS, or on model identifiability; learning the outcome regression loosely, or precisely. When the noise scale differs from the truth, there would be error due to the imperfect recovery of $j$. Sec. 4.2 shows that this error and balancing form a trade-off, which is adjusted by $\beta$.

Importance of balancing from the misspecification view. If we must learn a bPGS that cannot be well approximated, we have larger misspecification under a balanced prior and rely more on $Y$ in the posterior. Both are bad, because it is shown in (Bonhomme & Weidner, 2021) that the posterior only helps under bounded (small) misspecification, and the posterior estimator has higher variance than the prior estimator (see below for an extreme case). Again, we want a regularizer to encourage learning of a bPGS, so that we can explore the middle ground: relatively low-dimensional $\mathfrak{p}$, or relatively small $\mathbf{e}$.

Example. Assume the true outcome noise is (near) zero. Setting $\epsilon \rightarrow 0$ in our model, the posterior $p_{\theta}(\pmb{z}|\boldsymbol{x}, \boldsymbol{y}, t) = p_{\theta}(\boldsymbol{y}, \boldsymbol{z}|\boldsymbol{x}, t) / p_{\theta}(\boldsymbol{y}|\boldsymbol{x}, t)$ degenerates to $\pmb{f}_T^{-1}(Y) = \pmb{f}_T^{-1}(\pmb{j}_T(\mathfrak{p}_T)) = \pmb{v}^{-1}(\mathfrak{p}_T)$, a factual PGS. However, $\pmb{f}_{1-T}^{-1}(Y) = \pmb{f}_{1-T}^{-1}(\pmb{j}_T(\mathfrak{p}_T)) = \pmb{v}^{-1}(\pmb{j}_{1-T}^{-1} \circ \pmb{j}_T(\mathfrak{p}_T)) \neq \pmb{v}^{-1}(\mathfrak{p}_{1-T})$; the score recovered by the posterior does not work for the counterfactual assignment! The problem is that, unlike $X$, the outcome $Y = Y(T)$ is affected by $T$, and the degenerate posterior disregards the information from $X$ in the prior and depends exclusively on the factual $(Y, T)$.

# C.6 CONSISTENCY OF VAE AND PRIOR ESTIMATION

The following is a refined version of Theorem 4 in Khemakhem et al. (2020a). The result is proved by assuming: i) our VAE is flexible enough to ensure the ELBO is tight (equal to the true log-likelihood) for some parameters; ii) the optimization algorithm can achieve the global maximum of the ELBO (again equal to the log-likelihood).

Proposition 6 (Consistency of Intact-VAE).
Given model (3)&(6), and letting $p^*(\pmb{x}, \pmb{y}, t)$ be the true observational distribution, assume

i) there exists $(\bar{\theta}, \bar{\phi})$ such that $p_{\bar{\theta}}(\boldsymbol{y}|\boldsymbol{x}, t) = p^*(\boldsymbol{y}|\boldsymbol{x}, t)$ and $p_{\bar{\theta}}(\boldsymbol{z}|\boldsymbol{x}, \boldsymbol{y}, t) = q_{\bar{\phi}}(\boldsymbol{z}|\boldsymbol{x}, \boldsymbol{y}, t)$;
ii) the ELBO $\mathbb{E}_{\mathcal{D} \sim p^*}(\mathcal{L}(\pmb{x}, \pmb{y}, t; \pmb{\theta}, \phi))$ (4) can be optimized to its global maximum at $(\pmb{\theta}^{\prime}, \pmb{\phi}^{\prime})$.

Then, in the limit of infinite data, $p_{\pmb{\theta}^{\prime}}(\pmb{y}|\pmb{x}, t) = p^*(\pmb{y}|\pmb{x}, t)$ and $p_{\pmb{\theta}^{\prime}}(\pmb{z}|\pmb{x}, \pmb{y}, t) = q_{\phi^{\prime}}(\pmb{z}|\pmb{x}, \pmb{y}, t)$.

Proof. From i), we have $\mathcal{L}(\pmb{x}, \pmb{y}, t; \bar{\theta}, \bar{\phi}) = \log p^*(\pmb{y}|\pmb{x}, t)$. But we know $\mathcal{L}$ is upper-bounded by $\log p^*(\pmb{y}|\pmb{x}, t)$. So, $\mathbb{E}_{\mathcal{D} \sim p^*}(\log p^*(\pmb{y}|\pmb{x}, t))$ must be the global maximum of the ELBO (even if the data is finite).

Moreover, note that, for any $(\theta, \phi)$, we have $D_{\mathrm{KL}}(p_{\pmb{\theta}}(\pmb{z}|\pmb{x}, \pmb{y}, t) \| q_{\phi}(\pmb{z}|\pmb{x}, \pmb{y}, t)) \geq 0$ and, in the limit of infinite data, $\mathbb{E}_{\mathcal{D} \sim p^*}(\log p_{\pmb{\theta}}(\pmb{y}|\pmb{x}, t)) \leq \mathbb{E}_{\mathcal{D} \sim p^*}(\log p^*(\pmb{y}|\pmb{x}, t))$. Thus, the global maximum of the ELBO is achieved only when $p_{\pmb{\theta}}(\pmb{y}|\pmb{x}, t) = p^*(\pmb{y}|\pmb{x}, t)$ and $p_{\pmb{\theta}}(\pmb{z}|\pmb{x}, \pmb{y}, t) = q_{\phi}(\pmb{z}|\pmb{x}, \pmb{y}, t)$.

Consistent prior estimation of CATE follows directly from the identifications. The following is a corollary of Theorem 1.

Corollary 1. Under the conditions of Theorem 1, further require the consistency of Intact-VAE.
Then, in the limit of infinite data, we have $\mu_t(X) = \pmb{f}_t(\pmb{h}_t(X))$, where $\pmb{f}, \pmb{h}$ are the optimal parameters learned by the VAE.

# C.7 PRE/POST-TREATMENT PREDICTION

Sampling the posterior requires the post-treatment observation $(\pmb{y}, t)$. Often, it is desirable to also have a pre-treatment prediction for a new subject, with only an observation of its covariate $X = x$. To this end, we use the prior as a pre-treatment predictor for $Z$: replace $q_{\phi}$ with $p_{\Lambda}$ in (8) and drop the outer average taken over $\mathcal{D}$; everything else remains the same. We also have sensible pre-treatment prediction even without true low-dimensional PSs, because $p_{\Lambda}$ gives the best balanced approximation of the target PGS. The results of pre-treatment prediction are given in the experimental Section E.

# D MORE ON RELATED WORK

# D.1 CFR AND CEVAE

CFR and CEVAE are well-known machine learning methods for CATE estimation. Here, we make detailed comparisons to them.

# D.1.1 COMPARISONS WITH CFR

Our method is related to CFR in two ways. Theoretically, our bounds in Sec. 4.2 resemble those in Shalit et al. (2017). But we bound the CATE error, while CFR bounds PEHE; thus, our bounds give conditional balancing, while CFR only has unconditional balancing. See Sec. D.3 for more on the bounds. Conceptually, CFR is loosely related to our method because it also learns a representation as an outcome predictor, as mentioned in the follow-up Johansson et al. (2020). However, CFR does not have a generative model, so its representation is not formally related to PGSs. Moreover, CFR does not account for the outcome noise, while the uncertainty due to the noise is accounted for by our VAE.

# D.1.2 COMPARISONS WITH AND CRITICISMS OF CEVAE

Motivation. CEVAE is motivated by exploiting proxy variables, and its intuition is that the hidden confounder $U$ can be recovered by a VAE from proxy variables.
Our method is motivated by prognostic scores (Hansen, 2008), and our model is directly based on equation (2), which identifies CATE. There is no need to recover the hidden confounder in our framework.

Architecture. Our model is naturally based on (2), particularly the independence properties of the PGS. As a consequence, our VAE architecture is a natural combination of iVAE and CVAE (see Figure 1). Our ELBO (4) is derived from the standard variational lower bound.

On the other hand, the architecture of CEVAE is more ad hoc and complex. Its decoder follows the graphical model of the descendant proxy mentioned above, but adds an ad hoc component to mimic TARnet (Shalit et al., 2017): it uses separate NNs for the two potential outcomes. We tried this idea on the IHDP dataset, and, as we show in Sec. 5.2, it has basically no merit for our method, because we have a principled way of balancing.

The encoder of CEVAE is even more complex. To have post-treatment estimation, $q(T|X)$ and $q(Y|X,T)$ are added into the encoder. As a result, the ELBO of CEVAE has two additional likelihood terms corresponding to the two distributions. However, in our Intact-VAE, post-treatment estimation is given naturally by our standard encoder, thanks to the correspondence between our model and (2).

Justification. We have given the identifications and bounds for our method in this paper. Moreover, we carefully distinguish assumptions on the DGP from assumptions on our model, and identify the assumptions that are important for causality. There are few theoretical justifications for CEVAE. Its Theorem 1 directly assumes that the joint distribution $p(\boldsymbol{x}, \boldsymbol{y}, t, \boldsymbol{u})$, including the hidden confounder $U$, is recovered; then identification is trivial by the standard adjustment equation.

However, the challenge is exactly that the confounder is hidden, unobserved.
Many years of work in causal inference have gone into deriving conditions under which a hidden confounder can be (partially) recovered (Greenland, 1980; Kuroki & Pearl, 2014; Miao et al., 2018). In particular, Miao et al. (2018) gives the most recent identification result for the proxy setting, which requires a very specific two-proxy structure and additional completeness assumptions on the distributions. Thus, it is unreasonable to believe that a VAE, with simple descendant proxies, can recover the hidden confounder. Indeed, Rissanen & Marttinen (2021) recently give evidence that the method often fails.

Moreover, the identifiability of VAE itself is a challenging problem. As mentioned in the Introduction, Khemakhem et al. (2020a) is the first identifiability result for VAE, but it only identifies an equivalence class, not a unique representation function. Thus, it is also unconvincing that a VAE can learn a unique latent distribution without certain assumptions. As we show in Sec. 5.1, for relatively simple synthetic datasets, CEVAE cannot robustly recover the hidden confounder, even up to transformation, while our method can (though, again, this is not needed for our method).

# D.2 INJECTIVITY, INVERTIBILITY, MONOTONICITY, AND OVERLAP

Let us note that any injective mapping defines an invertible mapping, by restricting the domain of the inverse function to the range of the injective mapping. Also note that injectivity is weaker than monotonicity; a monotone mapping can be defined as an injective and order-preserving mapping between ordered sets. In particular, an injective and continuous mapping on $\mathbb{R}$ is monotone, and many works in econometrics give examples of this case.

Many classical and recent works (with many real-world applications, see C.1) in econometrics are based on monotonicity. In particular, there is a long line of work based on monotonicity of the treatment (Huber & Wüthrich, 2018).
More related to our method is another line of work based on monotonicity of the outcome; see (Chernozhukov & Hansen, 2013) and references therein for early results. Some recent works apply monotonicity of the outcome to nonparametric IV regression (NPIV) (Freyberger & Horowitz, 2015; Li et al., 2017; Chetverikov & Wilhelm, 2017), where the structural equation of the outcome is assumed to be $Y = f(T) + \epsilon$, with $f$ monotone and $T$ (the treatment) often continuous. In particular, Chetverikov & Wilhelm (2017) combine monotonicity of both treatment and outcome, and Freyberger & Horowitz (2015) consider discrete treatment (note that continuity or differentiability is not necessary for monotonicity). NPIV with monotone $f$ is closely related to our method; the difference is that, in our method, $T$ is replaced by a PGS, which is recovered from observables. Finally, as mentioned in Sec. 3.2, monotonicity is a kind of shape restriction, which also includes, e.g., concavity and symmetry, and has attracted recent interest (Chetverikov et al., 2018). However, most NPIV works focus on identifying $f$ rather than directly on TEs, and we are not aware of any work that uses monotonicity to address limited overlap.

Recently in machine learning, Johansson et al. (2019), Zhang et al. (2020b), and Johansson et al. (2020) note the relationship between invertibility and overlap. As mentioned, Johansson et al. (2020) give bounds without overlap, but the relationship between invertibility and overlap is not explicit in their theory. Johansson et al. (2019) explicitly discuss overlap and invertibility, but do not focus on TEs. Zhang et al. (2020b) assume overlap so that identification is given, and then focus on learning an overlapping representation that preserves the overlap given the covariate.
However, they do not relate invertibility and overlap; rather, they use an invertible representation function to preserve exchangeability given the covariate, and linear outcome regression to simplify the model. Relatedly, our identification results require (M2), for which linearity of the PGS and the representation function is a sufficient condition, and our outcome model is injective, to preserve exchangeability given the PGS. Thus, our method works in a more general setting, and arguably under weaker conditions.

# D.3 ADDITIONAL NOTES ON NOVELTIES OF THE BOUNDS IN SEC. 4.2

We give details and additional points regarding the novelties. Lu et al. (2020) also use a VAE and derive the bounds most related to ours. Still, our method strengthens Lu et al. (2020) in a simpler and more principled way: we distinguish the true score from the latent $Z$ and show that identification is the link; considering both prior and posterior, we show the symmetric nature of the balancing term and relate it to our KL term in (7), without ad hoc regularization; moreover, we consider outcome noise modeling, which is a strength of VAEs, and relate it to the hyperparameter $\beta$. In particular, in Lu et al. (2020), the latent variable $Z$ is conflated with the true representation ($\mathfrak{p}_t$ up to invertible mapping, in our case). Without identification, the method in fact has unbounded error. Note that Shalit et al. (2017) likewise do not consider the connection to identification, nor noise modeling. The error between $\hat{\tau}_{f}$ and $\tau_{f}$, which we bound, is due to the unknown outcome noise that is not accounted for by our Theorem 1; thus, the theory in Sec. 4.2 is complementary to that in Sec. 3.2. Finally, $\beta$ trades off the conditional balance of the learned PGS (affected by $f_{t}$) against the precision/effective sample size of the outcome regression, and can be seen as the probabilistic counterpart of Tarr & Imai (2021) and Kallus et al. (2018).
# E DETAILS AND ADDITIONS OF EXPERIMENTS

We evaluate post-treatment performance on the training and validation sets jointly (this is non-trivial; recall the fundamental problem of causal inference). The treatment and (factual) outcome should not be observed for pre-treatment predictions, so we report pre-treatment performance on a testing set. See also Sec. C.7 for the pre/post-treatment distinction.

# E.1 SYNTHETIC DATA

We detail how the random parameters in the DGPs are sampled. $\mu_{i}$ and $\sigma_{i}$ are sampled uniformly in the ranges $(-0.2, 0.2)$ and $(0, 0.2)$, respectively. The weights of the linear functions $h, k, l$ are sampled from standard normal distributions. The NNs $f_{0}, f_{1}$ use leaky ReLU activation with $\alpha = 0.5$ and have 3 to 8 layers (chosen randomly); the weights of each layer are sampled from $(-1.1, -0.9)$. To obtain a large but still reasonable outcome variance, the output of $f_{t}$ is divided by $C_{t} \coloneqq \operatorname{Var}_{\{\mathcal{D}|T = t\}}(f_{t}(Z))$. When generating DGPs with dependent noise, the variance parameter $g_{t}$ for the outcome is generated by adding a softplus layer after the respective $f_{t}$, and then normalized to the range $(0, 2)$.

We use the original implementation of $\mathrm{CFR}^3$. Very possibly due to implementation bugs, the CFR version using the Wasserstein distance raises a TensorFlow type-mismatch error on our synthetic dataset, and the CFR version using MMD diverges with a very large loss value on one or two of the 10 random DGPs. We use the MMD version and, when training diverges, report results from the models trained before divergence, which still give reasonable results. We search the balancing parameter alpha in [0.16, 0.32, 0.64, 0.8, 1.28], and fix the other hyperparameters as they were in the default config file.

![](images/236ff562f7cc1a32e0614812961602bc1e7922edb49f91f71a1a74e98c9b757c.jpg)
Figure 4: Degree of limited overlap w.r.t. $\omega$.
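The DGP sampling described above can be sketched as follows. This is a minimal numpy sketch under stated assumptions: the latent $Z \sim \mathcal{N}(\mu, \sigma^2)$ and the random treatment assignment are stand-ins (the full DGP of Sec. 5.1, including $h, k, l$ and the $\omega$-parameterized treatment model, is omitted), and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, alpha=0.5):
    return np.where(x > 0, x, alpha * x)

def sample_f(dim_in, rng):
    """Sample a random NN f_t: 3 to 8 layers, weights drawn from (-1.1, -0.9)."""
    depth = int(rng.integers(3, 9))
    dims = [dim_in] * depth + [1]
    weights = [rng.uniform(-1.1, -0.9, size=(dims[i], dims[i + 1]))
               for i in range(depth)]
    def f(z):
        h = z
        for w in weights[:-1]:
            h = leaky_relu(h @ w)
        return (h @ weights[-1]).squeeze(-1)
    return f

dim_w, n = 2, 500
mu = rng.uniform(-0.2, 0.2, size=dim_w)        # mu_i ~ U(-0.2, 0.2)
sigma = rng.uniform(0.0, 0.2, size=dim_w)      # sigma_i ~ U(0, 0.2)
w = mu + sigma * rng.standard_normal((n, dim_w))
t = rng.integers(0, 2, size=n)                 # illustrative stand-in assignment

f0, f1 = sample_f(dim_w, rng), sample_f(dim_w, rng)

# Divide the output of f_t by the within-group variance C_t = Var(f_t(Z) | T = t),
# then add unit outcome noise.
y = np.empty(n)
for tt, ft in [(0, f0), (1, f1)]:
    out = ft(w[t == tt])
    y[t == tt] = out / out.var() + rng.standard_normal((t == tt).sum())
```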
We characterize the degree of limited overlap by the percentage of observed values $\pmb{x}$ that give probability less than 0.001 to one of $p(t|x)$. The threshold is chosen so that all sample points near those values $\pmb{x}$ almost certainly belong to a single group, since we have 500 sample points in total. If we regard a DGP as very limited-overlapping when the above percentage is larger than 50%, then, as shown in Figure 4, none (all) of the 10 DGPs are very limited-overlapping with $\omega = 6$ ($\omega = 22$).

For diversity of the datasets, we set $g_{t}(W) = 1$ in the DGPs in the Appendix. Figure 5 shows that, with $\dim(Z) = 200$, our method works better than CFR under $\dim(W) = 1$ and as well as CFR under $\dim(W) > 1$. As mentioned in the Conclusion, this indicates that the theoretical requirement of injective $f_{t}$ in our model might be relaxed. Interestingly, larger $\beta$ seems to give better results here; this is understandable because $\beta$ controls the trade-off between fitting and balancing, and the fitting capacity of our decoder is much increased with $\dim(Z) = 200$. Note that the above observations on $\dim(Z)$ are not caused by fixing $g_t(W) = 1$ (compare Figure 5 with Figure 6 below).

Figure 6 shows the importance of noise modeling. Compared to Figure 2 in the main text, where $g_{t}(W)$ in the DGPs is not fixed, our method works worse here, particularly for large $\beta$, because noise modeling ($g, k$ in the ELBO) now only adds unnecessary complexity. The changes in performance w.r.t. different $\omega$ should be unrelated to overlap levels, and instead due to the complexity of the random DGPs; compared to Figure 5, with larger NNs in our VAE, the changes become much less significant. The drop in error for $\dim(W) > 3$ is due to the randomness of $f$ in (36). In Sec. 2.2, we saw that the 2-dimensional bPGS $\mathfrak{p} := (\mu_0(X), \mu_1(X))$ always exists under additive noise models.
Thus, when $\dim(W) > 2$, our method tries to recover that $\mathfrak{p}$, and generally performs no worse than under $\dim(W) = 2$, though still not better than under $\dim(W) = 1$.

![](images/f8c9a85f2cbfd7e1d5206d4f7e1db83985373daaad13d58687f225f3aeb8b407.jpg)
Figure 5: $\sqrt{\epsilon_{pehe}}$ on the synthetic dataset, with $g_{t}(W) = 1$ in the DGPs and $\dim (Z) = 200$ in our model. Error bars over 10 random DGPs.

![](images/d6a89c687eff226e04c44e74bffe2dae4dae4d41ff31c41d1b8d607ec8b834c5.jpg)
Figure 6: $\sqrt{\epsilon_{pehe}}$ on the synthetic dataset, with $g_{t}(W) = 1$ in the DGPs. Error bars over 10 random DGPs.

Figure 7 shows the results of ATE estimation. Notably, CFR's performance drops with the degree of limited overlap. Our method does not show this tendency except for very large $\beta$ ($\beta = 3$). This might be further evidence that CFR and its unconditional balancing overfit to PEHE (see Sec. 5.2). Also note that, under $\dim(W) = 1$, $\beta = 3$ gives the best results for ATE although it does not work well for PEHE; we do not know whether this generalizes to the conclusion that large $\beta$ gives better ATE estimation under the existence of a bPGS, and leave this for future investigation.

Figure 8 shows the results of pre-treatment prediction. In the left panel, both our method and CFR perform only slightly worse than post-treatment. This is reasonable because here we have a bPGS $W$ with $\dim(W) = 1$, so there is no need to learn a PGS. In the right panel, we also see no significant drop in performance compared to post-treatment. This might be due to the hardness of learning an approximate bPGS in this dataset, so that posterior estimation does not give much improvement.

More plots for latent recovery can be found at the end of the paper.
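The degree-of-limited-overlap diagnostic used in this subsection (the fraction of observed values $\pmb{x}$ that leave less than 0.001 probability for one of the treatment groups) can be sketched as follows. The logistic propensity below, with its slope standing in for $\omega$, is an illustrative assumption, not the paper's actual DGP.

```python
import numpy as np

def limited_overlap_fraction(propensity, threshold=1e-3):
    """Fraction of sample points for which one of p(t|x) is below `threshold`."""
    propensity = np.asarray(propensity)
    return float(np.mean(np.minimum(propensity, 1.0 - propensity) < threshold))

# Hypothetical example: a logistic propensity whose slope (standing in for
# omega) controls how quickly p(T=1|x) saturates, i.e. the overlap level.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
strong = 1.0 / (1.0 + np.exp(-6.0 * x))    # flatter slope: stronger overlap
weak = 1.0 / (1.0 + np.exp(-22.0 * x))     # steep slope: weaker overlap

frac_strong = limited_overlap_fraction(strong)
frac_weak = limited_overlap_fraction(weak)
```

Steeper propensities push more points into near-deterministic assignment, so `frac_weak` exceeds `frac_strong`, mirroring how the DGPs become very limited-overlapping as $\omega$ grows.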
![](images/4b3405d0b90adf2a6f1e0a0945e7d81c8b6236fd48e298ec036c86e2f82e26ad.jpg)
Figure 7: $\epsilon_{ate}$ on the synthetic dataset, with $g_{t}(W) = 1$ in the DGPs. Error bars over 10 random DGPs.

# E.2 IHDP

IHDP is based on an RCT where each data point represents a baby, with 25 features (6 continuous, 19 binary) about the baby's birth and mother. Race is introduced as a confounder by artificially removing all treated children with nonwhite mothers, leaving 747 subjects in the dataset. The outcome is synthesized by taking the covariates (features excluding race) as input, hence unconfoundedness holds given the covariates. Following previous work, we split the dataset 63:27:10 for training, validation, and testing. Note that there are no ethical concerns here, because the treatment assignment mechanism is made artificial by processing the data; moreover, our results are only quantitative and we draw no ethical conclusions.

The generating process is as follows (Hill, 2011, Sec. 4.1):

$$
Y (0) \sim \mathcal {N} \left(e ^ {\boldsymbol {a} ^ {T} (X + \boldsymbol {b})}, 1\right), \quad Y (1) \sim \mathcal {N} \left(\boldsymbol {a} ^ {T} X - c, 1\right), \tag {37}
$$

where $\pmb{a}$ is a random coefficient vector, $\pmb{b}$ is a constant bias with all elements equal to 0.5, and $c$ is a random parameter adjusting the degree of overlap between the treatment groups. As we can see, $\pmb{a}^T\pmb{X}$ is a true bPGS. As mentioned in the main text, the bPGS might be discrete. Thus, this experiment also shows the importance of the VAE, even when an apparent bPGS exists: under discrete PGSs, training a regression based on Proposition 2 is hard, but our VAE works well.

The two components added in the modified version of our method are as follows. First, we build the two outcome functions $f_{t}(Z), t = 0,1$ in our learning model (3) using two separate NNs.
Second, we add to our ELBO (4) a regularization term: the Wasserstein distance (Cuturi, 2013) between $\mathbb{E}_{\mathcal{D} \sim p(X|T = t)} p_{\Lambda}(Z|X), t \in \{0,1\}$. As shown in Table 2, the best unconditional balancing parameter is 0.1. Larger parameters give much worse PEHE and do not improve ATE estimation. Smaller parameters are more reasonable but still do not improve the results. The overall tendency is clear: compared to ours, CFR with its unconditional balancing does not improve ATE estimation; it may improve PEHE with a fine-tuned parameter, but possibly at the price of worse ATE estimation.

![](images/469047f2d71070feb48f70e492779cbeecc842ea96529e0f01a349afea3ac340.jpg)
Figure 8: Pre-treatment $\sqrt{\epsilon_{pehe}}$ on the synthetic dataset. Error bars over 10 random DGPs.

Table 3 shows pre-treatment results. All methods give reasonable results.

Table 2: Performance of the modified version with different unconditional balancing parameters, whose values are shown after "Mod".
| Method | Ours | Mod. 1 | Mod. 0.2 | Mod. 0.1 | Mod. 0.05 | Mod. 0.01 | CFR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $\epsilon_{ate}$ | .177±.007 | .196±.008 | .177±.007 | .167±.005 | .177±.006 | .179±.006 | .25±.01 |
| $\sqrt{\epsilon_{pehe}}$ | .843±.030 | 1.979±.082 | 1.116±.046 | .777±.026 | .894±.039 | .841±.029 | .71±.02 |
Table 3: Pre-treatment errors on IHDP over 1000 random DGPs. We report results with $\dim(Z) = 10$. Bold indicates the method(s) that are significantly better. The results are taken from Shalit et al. (2017), except GANITE (Yoon et al., 2018) and CEVAE (Louizos et al., 2017).
| Method | TMLE | BNN | CFR | CF | CEVAE | GANITE | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- |
| pre-$\epsilon_{ate}$ | NA | .42±.03 | .27±.01 | .40±.03 | .46±.02 | .49±.05 | .211±.011 |
| pre-$\sqrt{\epsilon_{pehe}}$ | NA | 2.1±.1 | .76±.02 | 3.8±.2 | 2.6±.1 | 2.4±.4 | .946±.048 |
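The IHDP response surfaces in (37) can be sketched as follows. This is a schematic reconstruction under stated assumptions: the covariate matrix is a random stand-in for the real IHDP features, the coefficients $\pmb{a}$ follow Hill (2011)'s setup only loosely (sampling probabilities assumed uniform here), and $c$ is chosen simply to align the two surfaces rather than by Hill's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 747, 25
X = rng.standard_normal((n, d))      # stand-in for the 747 x 25 IHDP covariates

a = rng.choice([0.0, 0.1, 0.2, 0.3, 0.4], size=d)  # random coefficient vector a
b = np.full(d, 0.5)                  # constant bias, all elements equal to 0.5
score = X @ a                        # a^T X: a true bPGS

# c offsets the treated surface; here chosen to align the two surfaces on average
# (an assumption -- Hill (2011) draws c to control the overlap instead).
c = float(np.mean(np.exp((X + b) @ a) - score))

y0 = rng.normal(np.exp((X + b) @ a), 1.0)   # Y(0) ~ N(exp(a^T (X + b)), 1)
y1 = rng.normal(score - c, 1.0)             # Y(1) ~ N(a^T X - c, 1)
cate = (score - c) - np.exp((X + b) @ a)    # noiseless CATE of the two surfaces
```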
# E.3 POKEC SOCIAL NETWORK DATASET

This experiment shows that our method performs best compared with methods specialized for networked deconfounding, a challenging problem in its own right. Thus, our method has the potential to work under unobserved confounding, but we leave detailed experimental and theoretical investigation to future work.

Pokec (Leskovec & Krevl, 2014) is a real-world social network dataset. We experiment on a semi-synthetic dataset based on Pokec, introduced in (Veitch et al., 2019), and use exactly the same pre-processing and generating procedure. The pre-processed network has about 79,000 vertices (users) connected by $1.3 \times 10^{6}$ undirected edges. The users are restricted to three living districts within the same region. The network structure is expressed by a binary adjacency matrix $\pmb{G}$. Following (Veitch et al., 2019), we split the users into 10 folds, test on each fold, and report the mean and std of pre-treatment ATE predictions. We further split the remaining users (in the other 9 folds) 6:3 for training and validation.

Each user has 12 attributes, among which district, age, or join date is used as the confounder $U$ to build 3 different datasets, with the remaining 11 attributes used as the covariate $X$. Treatment $T$ and outcome $Y$ are synthesized as follows:

$$
T \sim \operatorname {B e r n} (g (U)), \quad Y = T + 1 0 (g (U) - 0. 5) + \epsilon , \tag {38}
$$

where $\epsilon$ is standard normal. Note that district has 3 categories; age and join date are also discretized into three bins. $g(U)$, which is a bPGS, maps these three categories to $\{0.15, 0.5, 0.85\}$.

$\beta$-Intact-VAE is expected to learn a bPGS from $G$, $X$, if we can exploit the network structure effectively.
Given the huge network structure, most users can practically be identified by their attributes and neighborhood structure, which means $U$ can roughly be seen as a deterministic function of $G$, $X$. This idea is comparable to Assumptions 2 and 4 in (Veitch et al., 2019), which postulate directly that a balancing score can be learned in the limit of an infinitely large network. To extract information from the network structure, we use a Graph Convolutional Network (GCN) (Kipf & Welling, 2017) in the conditional prior and encoder of $\beta$-Intact-VAE. The implementation details are given at the end of this subsection.

Table 4 shows the results. The pre-treatment $\sqrt{\epsilon_{pehe}}$ for the age, district, and join-date confounders is 1.085, 0.686, and 0.699, respectively, practically the same as the ATE errors. Note that Veitch et al. (2019) do not give individual-level predictions.

A difficulty with the GCN is that the network $G$ and covariates $X$ of all users are always needed, regardless of whether we are in the training, validation, or testing phase. However, the separation can still make sense if we take care that the treatment and outcome are used only in the respective phase; e.g., $(y_{m}, t_{m})$ of a testing user $m$ is only used in testing.

Table 4: Pre-treatment ATE on Pokec. The ground truth ATE is 1, as we can see from (38). "Unadjusted" estimates the ATE by $\mathbb{E}_{\mathcal{D}}(y_1) - \mathbb{E}_{\mathcal{D}}(y_0)$. "Parametric" is a stochastic block model for networked data (Gopalan & Blei, 2013). "Embedding-" denotes the best alternatives given by (Veitch et al., 2019). Bold indicates the method(s) significantly better than all the others. We report results with a 20-dimensional latent $Z$. The results of the other methods are taken from (Veitch et al., 2019).
| | Age | District | Join Date |
| --- | --- | --- | --- |
| Unadjusted | 4.34 ± 0.05 | 4.51 ± 0.05 | 4.03 ± 0.06 |
| Parametric | 4.06 ± 0.01 | 3.22 ± 0.01 | 3.73 ± 0.01 |
| Embedding-Reg. | 2.77 ± 0.35 | 1.75 ± 0.20 | 2.41 ± 0.45 |
| Embedding-IPW | 3.12 ± 0.06 | 1.66 ± 0.07 | 3.10 ± 0.07 |
| Ours | 2.08 ± 0.32 | 1.68 ± 0.10 | 1.70 ± 0.13 |
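The synthesis in (38) can be sketched as follows. This is a standalone stand-in: $U$ is drawn uniformly over the three categories, and the network and covariates are omitted. It illustrates why the naive difference of group means (the "Unadjusted" row of Table 4) is badly biased, while an oracle IPW estimate using the true $g(U)$ recovers the ATE of 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
u = rng.integers(0, 3, size=n)            # confounder U with 3 categories
g = np.array([0.15, 0.5, 0.85])[u]        # g(U): a true bPGS

t = rng.binomial(1, g)                    # T ~ Bern(g(U))
y = t + 10.0 * (g - 0.5) + rng.standard_normal(n)   # Y = T + 10(g(U) - 0.5) + eps

# Naive ("Unadjusted") estimate: biased away from the true ATE of 1,
# because g(U) confounds T and Y.
naive = y[t == 1].mean() - y[t == 0].mean()

# Oracle IPW using the true propensity g(U) removes the confounding.
ipw = float(np.mean(t * y / g) - np.mean((1 - t) * y / (1 - g)))
```

The naive estimate here lands near 4, consistent with the "Unadjusted" row; the methods in Table 4 must instead learn a (balancing) score from $G$, $X$ rather than use the oracle $g(U)$.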
The GCN takes the network matrix $\pmb{G}$ and the whole covariate matrix $\pmb{X} \coloneqq (\pmb{x}_1^T, \dots, \pmb{x}_M^T)^T$, where $M$ is the number of users, and outputs a representation matrix $\pmb{R}$, again for all users. During training, we select the rows of $\pmb{R}$ that correspond to users in the training set. We then treat this training representation matrix as if it were the covariate matrix of a non-networked dataset; that is, the downstream networks in the conditional prior and encoder are the same as in the other two experiments, but take $(R_{m,:})^T$ as input where $\pmb{x}_m$ was expected. We have corresponding selection operations for validation and testing. We can still train $\beta$-Intact-VAE, including the GCN, with Adam, simply by setting the gradients of the non-selected rows of $\pmb{R}$ to 0.

Note that the GCN cannot be trained with mini-batches; instead, we perform batch gradient descent using the full dataset in each iteration, with an initial learning rate of $10^{-2}$. We use dropout (Srivastava et al., 2014) with rate 0.1 to prevent overfitting.

# E.4 EMPIRICAL VALIDATION OF THE BOUNDS IN SEC. 4.2

Here we focus on the $D(X)$ term in Theorem 2 because it is directly related to conditional balance.

In the figure attached at the end of the paper, the rows correspond to 3 overlap levels from strong to weak ($\omega = 6, 14, 22$, respectively). The first column shows the histograms of correlation coefficients between $D(X)$ and $\epsilon_{f}(X)$ on 100 random DGPs. The vertical bars in the histograms are the 5, 25, 50, 75, and 95 percentiles (the values are shown in the table below). The other 10 columns show plots of the distributions of $(D(X),\epsilon_{f}(X))$ for the first 10 DGPs; the correlation coefficient for each DGP is shown as "corrcoef=" above each plot. The plots are in log-log scale, because both $D$ and $\epsilon_{f}$ are one-sided and most data points concentrate near $(0,0)$, which would make linear-scale plots hard to read.
We make two important observations from the histograms: 1) on the majority of DGPs, there are positive correlations between $D$ and $\epsilon_f$; 2) the positive correlation is stronger with weaker overlap (the portion of large correlations increases, and the mean corrcoefs are 0.100, 0.110, and 0.121, respectively).

Thus, our bounds and conditional balance are significant. Not all DGPs show positive correlations, which is reasonable because our bound (11) has three other terms that can obscure the relation between $D$ and $\epsilon_{f}$. DGPs 1, 3, 6, 8, and 10 show typical situations where positive correlations occur.

Table 5: Percentiles of the correlation coefficients between $D(X)$ and $\epsilon_f(X)$ on 100 random DGPs.
| Percentile | 5 | 25 | 50 | 75 | 95 |
| --- | --- | --- | --- | --- | --- |
| ω = 6 | -0.289 | -0.086 | 0.069 | 0.299 | 0.609 |
| ω = 14 | -0.328 | -0.124 | 0.055 | 0.337 | 0.636 |
| ω = 22 | -0.274 | -0.128 | 0.067 | 0.341 | 0.634 |
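The quantities in Table 5 can be computed as follows. The per-sample $D(X)$ and $\epsilon_f(X)$ values below are synthetic stand-ins (weakly positively related, mimicking the observed mean corrcoef of about 0.1); only the correlation-coefficient and percentile computation mirrors the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# One correlation coefficient per random DGP, as in the histograms.
corrcoefs = []
for _ in range(100):
    d = rng.random(500)                  # stand-in for D(X) over 500 samples
    eps_f = 0.1 * d + rng.random(500)    # stand-in for eps_f(X), weakly correlated
    corrcoefs.append(np.corrcoef(d, eps_f)[0, 1])

# The 5/25/50/75/95 percentiles reported in Table 5.
percentiles = np.percentile(corrcoefs, [5, 25, 50, 75, 95])
mean_corr = float(np.mean(corrcoefs))
```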
+ +# E.5 ADDITIONAL PLOTS ON SYNTHETIC DATASETS + +![](images/e66a7e51fb7d6d028546f7ecf664f53991b95baf287b5975f2d64a3b0d13d1e5.jpg) +Figure 9: Plots of recovered-true latent. Rows: first 10 nonlinear random models, columns: outcome noise level. + +![](images/4277dcd410ee2e9229c77b1d52633cc981132cfd210c0b3c961aa88eff2ef25b.jpg) +Figure 10: Plots of recovered-true latent. Conditional prior depends on $t$ . Rows: first 10 nonlinear random models, columns: outcome noise level. Compare to the previous figure, we can see the transformations for $t = 0, 1$ are not the same, confirming the importance of balanced prior. + +![](images/38a48c64a9f21f6f1a449be1b774e21705c84d627dc3d25d960a89ad755823e6.jpg) + +![](images/ac9a1bd455538c900dd411cd28837263efac086beb759c10f2c34fbc45be038e.jpg) + +![](images/a64756b75d4568da13dda4f4f64e4c8a704443b36ed3349e4cdfb7675a768cab.jpg) + +![](images/f2ca49f2c318ad276680a2b8cbf98f44086b2c7b1416f6ecdb532e11e6f5a95b.jpg) + +![](images/30a374f41e8d6012249e2fb219c40504eab93c2e7c5b1952b4bc47d536bae9f6.jpg) + +![](images/c262f844b7a495cdb783f4659784575cc9a2b66fdafdb88546c555abe348f7ad.jpg) + +![](images/3a2b3755eb42b41a0acff9796582f49b9b2c2ed4be440ab2324bf92e9acbe632.jpg) + +![](images/10fea1934f6969aadadede7a6ddd154219bd249e3ce95ff9ae217357dcdf8176.jpg) + +![](images/2b679a8cea3a15a3c97b024faf88c9058f9beb6a67503a54006396d3f1df4493.jpg) + +![](images/35d92a5065e7fab66d9ceff326327f247689e2a2a3bda88d9a1abc581abb6b90.jpg) + +![](images/769b5e1ede1092f1e438f61c07cd10464e10f3b1f0e7eef4bc6cf34edc794e6c.jpg) + +![](images/c9c5b387eb2c130e756bdbd7db836661f53b157f626de395b7b534c09239c191.jpg) + +![](images/7757c70efcf02b29423b5b11cc9404efa386bed4c4669bd9232cec8f1662ad3e.jpg) + +![](images/908a6abf2c26890f3d49a9218ad4d37f8f0052eace076c7bc55dcd0c2c1432ae.jpg) + +![](images/8658f3bab158c42ebeeccf6e2af5555e3893ea7a50ce4e84cb67e557e22c669c.jpg) + +![](images/7c5e0aa831d329eb4b868b784e68ee579e40dc361c64a681222c12b2a137fd17.jpg) + 
+![](images/3c46eba02a684bc61352ae59f0bc9a16ea935b9630ae83eec2e649ebf18108f3.jpg) + +![](images/555bb4e27fba2d232a2805a3a082b3e16239e34a152c05333288330ab756e05a.jpg) + +![](images/faf055061289e2ecf4dc55c71bee325efa4aa0cc077d1cfdc226dcc5cce3dc40.jpg) + +![](images/73eb4d86bf6210b526e84adc4866315d8619cb8fd736786b0599999bac71d2ba.jpg) + +![](images/b1363cdfe96043b53776e290ac9806c3c3e4c8161a88a2d4e6fffc9d752f5e8d.jpg) + +![](images/41eb44a1fece0cb325751b315f66e6df81a159935dbb8e992ef8cc31a615532c.jpg) + +![](images/a4f6e083289fd0eb2362c321cc886783b65f9be91fb3597091706c25557074d3.jpg) + +![](images/315aa0ee30ea60305eec8fa6c20cb543b7c475d6ce4edab7e91ad8605a8088c9.jpg) + +![](images/355a408717fff2ed53d795d62c02826bb8c883709ff4b3a3bb3d73460e8c6922.jpg) + +![](images/016646018d496cfeba66440e261c90b6eb0603decc55a162c8b4df3b32e93c10.jpg) + +![](images/f7a93f11cee7e8cd7bfd10106c0a93cde716920412927efe16f3d2e40a3c810a.jpg) + +![](images/485b96d45076cad1b8cb3ee48625d34aeefa690b91c60238d47c970e64461649.jpg) + +![](images/d23d9d646b92ae00ebab5bc5e73e01bc68baafe93b18c3dcd3de7be0743e99e4.jpg) + +![](images/51718dc975cc7dd76aefa74ac903e774ece973e27aedc464224f0020711ddfc4.jpg) + +![](images/d5d04a97f22b88fee6c1a7707c7ccea8040a7ce58e6f3ba772577632cd01ae26.jpg) + +![](images/a9d4f6e003c515133d8cacb11128c65d69b1d2c98ec08985900f2e2748d6871b.jpg) + +![](images/3bbdecfce092cc2e75b25de41a2eeee6726dda50fdb7e94046698a5e1291c277.jpg) \ No newline at end of file diff --git a/betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/images.zip b/betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e1c3196ef25d1e02dbfa2e63a1f0da23bc1c06a9 --- /dev/null +++ b/betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a25c51abd042b11d0c48d45b2f360f5f0299a14f4b044e60529e354d7aa21b2c +size 1152116 diff 
--git a/betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/layout.json b/betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..86a44ac6696eebf56be982d38a2c203a8bcedc5c --- /dev/null +++ b/betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9ef5b8fdfcdc5c18545ab69ef8fbcfa654550e032a19fb36d788c34d6bdd3572 +size 1642158 diff --git a/bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_content_list.json b/bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b44cb477b217d4e765296c4efcc855324fc723d5 --- /dev/null +++ b/bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b8a56f5291e7616c7e8699cdf8458827b9f8de7cca79d9195c5c3f94d8d90c0 +size 225068 diff --git a/bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_model.json b/bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_model.json new file mode 100644 index 0000000000000000000000000000000000000000..48309aae6a96d3086cd5b759573b5e60f1b8dfc4 --- /dev/null +++ b/bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd6117484900e957d2194a4bb1148c1edcb581ba53224786fa6bcba2dd9b1c58 +size 258683 diff --git a/bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_origin.pdf b/bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d8453dca517e646892b4977726ee47a3d9ed99b0 --- /dev/null +++ b/bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_origin.pdf @@ -0,0 +1,3 @@ +version 
https://git-lfs.github.com/spec/v1 +oid sha256:c63992aab215d14d33a0ed7b22a48e9d68cd80a177723e477f61ef8a36793c34 +size 7797691 diff --git a/bootstrappedmetalearning/full.md b/bootstrappedmetalearning/full.md new file mode 100644 index 0000000000000000000000000000000000000000..6c37419ce6110d243470b5980cbd976dbab6ce60 --- /dev/null +++ b/bootstrappedmetalearning/full.md @@ -0,0 +1,795 @@ +# BOOTSTRAPPED META-LEARNING + +Sebastian Flennerhag + +DeepMind + +flennerhag@google.com + +Yannick Schroecker + +DeepMind + +Tom Zahavy + +DeepMind + +Hado van Hasselt + +DeepMind + +David Silver + +DeepMind + +Satinder Singh + +DeepMind + +# ABSTRACT + +Meta-learning empowers artificial intelligence to increase its efficiency by learning how to learn. Unlocking this potential involves overcoming a challenging meta-optimisation problem. We propose an algorithm that tackles this problem by letting the meta-learner teach itself. The algorithm first bootstraps a target from the meta-learner, then optimises the meta-learner by minimising the distance to that target under a chosen (pseudo-)metric. Focusing on meta-learning with gradients, we establish conditions that guarantee performance improvements and show that the metric can control meta-optimisation. Meanwhile, the bootstrapping mechanism can extend the effective meta-learning horizon without requiring backpropagation through all updates. We achieve a new state-of-the-art for model-free agents on the Atari ALE benchmark and demonstrate that it yields both performance and efficiency gains in multi-task meta-learning. Finally, we explore how bootstrapping opens up new possibilities and find that it can meta-learn efficient exploration in an $\varepsilon$ -greedy $Q$ -learning agent—without backpropagating through the update rule. + +# 1 INTRODUCTION + +In a standard machine learning problem, a learner or agent learns a task by iteratively adjusting its parameters under a given update rule, such as Stochastic Gradient Descent (SGD). 
Typically, the learner's update rule must be tuned manually. In contrast, humans learn seamlessly by relying on previous experiences to inform their learning processes (Spelke & Kinzler, 2007). + +For a (machine) learner to have the same capability, it must be able to learn its update rule (or such inductive biases). Meta-learning is one approach that learns (parts of) an update rule by applying it for some number of steps and then evaluating the resulting performance (Schmidhuber, 1987; Hinton & Plaut, 1987; Bengio et al., 1991). For instance, a well-studied and often successful approach is to tune parameters of a gradient-based update, either online during training on a single task (Bengio, 2000; Maclaurin et al., 2015; Xu et al., 2018; Zahavy et al., 2020), or meta-learned over a distribution of tasks (Finn et al., 2017; Rusu et al., 2019; Flennerhag et al., 2020; Jerfel et al., 2019; Denevi et al., 2019). More generally, the update rule can be an arbitrary parameterised function (Hochreiter et al., 2001; Andrychowicz et al., 2016; Kirsch et al., 2019; Oh et al., 2020), or the function itself can be meta-learned jointly with its parameters (Alet et al., 2020; Real et al., 2020). + +Meta-learning is challenging because to evaluate an update rule, it must first be applied. This often leads to high computational costs. As a result most works optimise performance after $K$ applications of the update rule and assume that this yields improved performance for the remainder of the learner's lifetime (Bengio et al., 1991; Maclaurin et al., 2015; Metz et al., 2019). When this assumption fails, meta-learning suffers from a short-horizon bias (Wu et al., 2018; Metz et al., 2019). Similarly, optimizing the learner's performance after $K$ updates can fail to account for the process of learning, causing another form of myopia (Flennerhag et al., 2019; Stadie et al., 2018; Chen et al., 2016; Cao et al., 2019). 
Challenges in meta-optimisation have been observed to cause degraded lifetime performance (Lv et al., 2017; Wichrowska et al., 2017), collapsed exploration (Stadie et al., 2018; Chen et al., 2016), biased learner updates (Stadie et al., 2018; Zheng et al., 2018), and poor generalisation performance (Wu et al., 2018; Yin et al., 2020; Triantafillou et al., 2020). + +We argue that defining the meta-learner's objective directly in terms of the learner's objective—i.e. the performance after $K$ update steps—creates two bottlenecks in meta-optimisation. The first bottleneck is curvature: the meta-objective is constrained to the same type of geometry as the learner; the second is myopia: the meta-objective is fundamentally limited to evaluating performance within the $K$ -step horizon, but ignores future learning dynamics. Our goal is to design an algorithm that removes these. + +The algorithm relies on two main ideas. First, to mitigate myopia, we introduce the notion of bootstrapping a target from the meta-learner itself, a meta-bootstrap, that infuses information about learning dynamics in the objective. Second, to control curvature, we formulate the meta-objective in terms of minimising distance (or divergence) to the bootstrapped target, thereby controlling the meta-loss landscape. In this way, the meta-learner learns from its future self. This leads to a bootstrapping effect where improvements beget further improvements. We present a detailed formulation in Section 3; on a high level, as in previous works, we first unroll the meta-learned update rule for $K$ steps to obtain the learner's new parameters. Whereas standard meta-objectives optimise the update rule with respect to (w.r.t.) the learner's performance under the new parameters, our proposed algorithm constructs the meta-objective in two steps: + +1. It bootstraps a target from the learner's new parameters. 
In this paper, we generate targets by continuing to update the learner's parameters—either under the meta-learned update rule or another update rule—for some number of steps. +2. The learner's new parameters—which are a function of the meta-learner's parameters—and the target are projected onto a matching space. A simple example is Euclidean parameter space. To control curvature, we may choose a different (pseudo-)metric space. For instance, a common choice under probabilistic models is the Kullback-Leibler (KL) divergence. + +The meta-learner is optimised by minimising distance to the bootstrapped target. We focus on gradient-based optimisation, but other optimisation routines are equally applicable. By optimising meta-parameters in a well-behaved space, we can drastically reduce ill-conditioning and other phenomena that disrupt meta-optimisation. In particular, this form of Bootstrapped Meta-Gradient (BMG) enables us to infuse information about future learning dynamics without increasing the number of update steps to backpropagate through. In effect, the meta-learner becomes its own teacher. We show that BMG can guarantee performance improvements (Theorem 1) and that this guarantee can be stronger than under standard meta-gradients (Corollary 1). Empirically, we find that BMG provides substantial performance improvements over standard meta-gradients in various settings. We obtain a new state-of-the-art result for model-free agents on Atari (Section 5.2) and improve upon MAML (Finn et al., 2017) in the few-shot setting (Section 6). Finally, we demonstrate how BMG enables new forms of meta-learning, exemplified by meta-learning $\varepsilon$ -greedy exploration (Section 5.1). + +# 2 RELATED WORK + +Bootstrapping as used here stems from temporal difference (TD) algorithms in reinforcement learning (RL) (Sutton, 1988). In these algorithms, an agent learns a value function by using its own future predictions as targets. 
Bootstrapping has recently been introduced in the self-supervised setting (Guo et al., 2020; Grill et al., 2020). In this paper, we introduce the idea of bootstrapping in the context of meta-learning, where a meta-learner learns about an update rule by generating future targets from it. + +Our approach to target matching is related to methods in multi-task meta-learning (Flennerhag et al., 2019; Nichol et al., 2018) that meta-learn an initialisation for SGD by minimising the Euclidean distance to task-optimal parameters. BMG generalises this concept by allowing for arbitrary meta-parameters, matching functions, and target bootstraps. It is further related to the more general concept of self-referential meta-learning (Schmidhuber, 1987; 1993), where the meta-learned update rule is used to optimise its own meta-objective. + +Target matching under KL divergences results in a form of distillation (Hinton et al., 2015), where an online network (student) is encouraged to match a target network (teacher). In a typical setup, the target is either a fixed (set of) expert(s) (Hinton et al., 2015; Rusu et al., 2015) or a moving aggregation of current experts (Teh et al., 2017; Grill et al., 2020), whereas BMG bootstraps a target by following an update rule. Finally, BMG is loosely inspired by trust-region methods that introduce a distance function to regularise gradient updates (Pascanu & Bengio, 2014; Schulman et al., 2015; Tomar et al., 2020; Hessel et al., 2021). + +# 3 BOOTSTRAPPED META-GRADIENTS + +We begin in the single-task setting and turn to multi-task meta-learning in Section 6. The learner's problem is to minimise a stochastic objective $f(\mathbf{x}) \coloneqq \mathbb{E}[\ell (\mathbf{x};\zeta)]$ over a data distribution $p(\zeta)$ , where $\zeta$ denotes a source of data and $\mathbf{x} \in \mathcal{X} \subset \mathbb{R}^{n_x}$ denotes the learner's parameters.
In RL, $f$ is typically the (negative) expected value of a policy $\pi_{\mathbf{x}}$ ; in supervised learning, $f$ may be the expected negative log-likelihood under a probabilistic model $\pi_{\mathbf{x}}$ . We provide precise formulations in Sections 5 and 6. + +The meta-learner's problem is to learn an update rule $\varphi : \mathcal{X} \times \mathcal{H} \times \mathcal{W} \to \mathcal{X}$ that updates the learner's parameters by $\mathbf{x}^{(1)} = \mathbf{x} + \varphi(\mathbf{x}, \mathbf{h}, \mathbf{w})$ given $\mathbf{x} \in \mathcal{X}$ , a learning state $\mathbf{h} \in \mathcal{H}$ , and meta-parameters $\mathbf{w} \in \mathcal{W} \subset \mathbb{R}^{n_w}$ of the update rule. We make no assumptions on the update rule other than differentiability in $\mathbf{w}$ . As such, $\varphi$ can be a recurrent neural network (Hochreiter et al., 2001; Wang et al., 2016; Andrychowicz et al., 2016) or gradient descent (Bengio, 2000; Maclaurin et al., 2015; Finn et al., 2017). The learning state $\mathbf{h}$ contains any other data required to compute the update; in a black-box setting $\mathbf{h}$ contains an observation and the recurrent state of the network; for gradient-based updates, $\mathbf{h}$ contains the (estimated) gradient of $f$ at $\mathbf{x}$ along with any auxiliary information; for instance, SGD is given by $\mathbf{x}^{(1)} = \mathbf{x} - \alpha \nabla_x f(\mathbf{x})$ with $\mathbf{h} = \nabla_x f(\mathbf{x})$ , $\mathbf{w} = \alpha \in \mathbb{R}_+$ . + +![](images/821c297219bac1cee542f9905c0908869d8bada929c24c09e1015600a7285a58.jpg) +Figure 1: Bootstrapped Meta-Gradients. + +The standard meta-gradient (MG) optimises meta-parameters $\mathbf{w}$ by taking $K$ steps under $\varphi$ and evaluating the resulting learner parameter vector under $f$ . 
With a slight abuse of notation, let $\mathbf{x}^{(K)}(\mathbf{w})$ denote the learner's parameters after $K$ applications of $\varphi$ starting from some $(\mathbf{x},\mathbf{h},\mathbf{w})$ , where $(\mathbf{x},\mathbf{h})$ evolve according to $\varphi$ and the underlying data distribution. The MG update is defined by + +$$ +\mathbf {w} ^ {\prime} = \mathbf {w} - \beta \nabla_ {\mathbf {w}} f (\mathbf {x} ^ {(K)} (\mathbf {w})), \quad \beta \in \mathbb {R} _ {+}. \tag {1} +$$ + +Extensions involve averaging the performance over all iterates $\mathbf{x}^{(1)},\ldots ,\mathbf{x}^{(K)}$ (Andrychowicz et al., 2016; Chen et al., 2016; Antoniou et al., 2019) or using validation data in the meta-objective (Bengio et al., 1991; Maclaurin et al., 2015; Finn et al., 2017; Xu et al., 2018). We observe two bottlenecks in the meta-objective in Eq. 1. First, the meta-objective is subject to the same curvature as the learner; thus, if $f$ is ill-conditioned, so is the meta-objective. Second, the meta-objective can only evaluate the meta-learner on dynamics up to the $K$ th step, but ignores the effects of future updates. + +To tackle myopia, we introduce a Target Bootstrap (TB) $\xi : \mathcal{X} \mapsto \mathcal{X}$ that maps the meta-learner's output $\mathbf{x}^{(K)}$ into a bootstrapped target $\tilde{\mathbf{x}} = \xi (\mathbf{x}^{(K)})$ . We focus on TBs that unroll $\varphi$ a further $L - 1$ steps before taking a final gradient step on $f$ , with targets of the form $\tilde{\mathbf{x}} = \mathbf{x}^{(K + L - 1)} - \alpha \nabla f(\mathbf{x}^{(K + L - 1)})$ . This TB encourages the meta-learner to reach future states on its trajectory faster while nudging the trajectory in a descent direction. Crucially, regardless of the bootstrapping strategy, we do not backpropagate through the target. Akin to temporal difference learning in RL (Sutton, 1988), the target is a fixed goal that the meta-learner should try to produce within the $K$ -step budget.
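The target bootstrap above admits a minimal sketch, assuming a generic `update_rule` standing in for the meta-learned $\varphi$ and a gradient oracle `grad_f` for $f$ (both names are illustrative, not from the paper):

```python
import numpy as np

def bootstrap_target(x_K, update_rule, grad_f, alpha, L):
    """Build the target x~ = x^(K+L-1) - alpha * grad_f(x^(K+L-1)):
    unroll the meta-learned update rule for a further L-1 steps, then
    take one final gradient step on f. The result is treated as a
    constant, i.e. no gradients flow back through the target."""
    x = np.array(x_K, dtype=float)
    for _ in range(L - 1):
        x = x + update_rule(x)      # continue under the meta-learned rule
    return x - alpha * grad_f(x)    # final descent step on f
```

For $L = 1$ the loop is skipped and the target is a single gradient step from $\mathbf{x}^{(K)}$.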
+ 

Finally, to improve the meta-optimisation landscape, we introduce a matching function $\mu : \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+$ that measures the (dis)similarity between the meta-learner's output, $\mathbf{x}^{(K)}(\mathbf{w})$ , and the target, $\tilde{\mathbf{x}}$ , in a matching space defined by $\mu$ (see Figure 1). Taken together, the BMG update is defined by + +$$ +\tilde {\mathbf {w}} = \mathbf {w} - \beta \nabla_ {\mathbf {w}} \mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)} (\mathbf {w})), \quad \beta \in \mathbb {R} _ {+}, \tag {2} +$$ + +where the gradient is with respect to the second argument of $\mu$ . Thus, BMG describes a family of algorithms based on the choice of matching function $\mu$ and TB $\xi$ . In particular, MG is a special case of BMG under matching function $\mu (\tilde{\mathbf{x}},\mathbf{x}^{(K)}) = \| \tilde{\mathbf{x}} -\mathbf{x}^{(K)}\| _2^2$ and TB $\xi (\mathbf{x}^{(K)}) = \mathbf{x}^{(K)} - \frac{1}{2}\nabla_{x}f(\mathbf{x}^{(K)})$ , since the bootstrapped meta-gradient reduces to the standard meta-gradient: + +$$ +\nabla_ {w} \left\| \tilde {\mathbf {x}} - \mathbf {x} ^ {(K)} (\mathbf {w}) \right\| _ {2} ^ {2} = - 2 D \left(\tilde {\mathbf {x}} - \mathbf {x} ^ {(K)}\right) = D \nabla_ {x} f \left(\mathbf {x} ^ {(K)}\right) = \nabla_ {w} f \left(\mathbf {x} ^ {(K)} (\mathbf {w})\right), \tag {3} +$$ + +where $D$ denotes the (transposed) Jacobian of $\mathbf{x}^{(K)}(\mathbf{w})$ . For other matching functions and target strategies, BMG produces different meta-updates compared to MG. We discuss these choices below. + +Matching Function Of primary concern to us are models that output a probability distribution, $\pi_{\mathbf{x}}$ . A common pseudo-metric over a space of probability distributions is the Kullback-Leibler (KL) divergence.
For instance, Natural Gradients (Amari, 1998) point in the direction of steepest descent under the KL-divergence, often approximated through a KL-regularization term (Pascanu & Bengio, 2014). KL-divergences also arise naturally in RL algorithms (Kakade, 2001; Schulman et al., 2015; Abdolmaleki et al., 2018). Hence, a natural starting point is to consider KL-divergences between the target and the iterate, e.g. $\mu (\tilde{\mathbf{x}},\mathbf{x}^{(K)}) = \mathrm{KL}\left(\pi_{\tilde{\mathbf{x}}}\parallel \pi_{\mathbf{x}^{(K)}}\right)$ . In actor-critic algorithms (Sutton et al., 1999), the policy defines only part of the agent—the value function defines the other. Thus, we also consider a composite matching function over both policy and value function. + +Target Bootstrap We analyze conditions under which BMG guarantees performance improvements in Section 4 and find that the target should co-align with the gradient direction. Thus, in this paper we focus on gradient-based TBs and find that they perform well empirically. As with matching functions, this is a small subset of all possible choices; we leave the exploration of other choices for future work. + +# 4 PERFORMANCE GUARANTEES + +In this analysis, we restrict attention to the noise-less setting (true expectations). In this setting, we ask three questions: (1) what local performance guarantees are provided by MG? (2) What performance guarantees can BMG provide? (3) How do these guarantees relate to each other? To answer these questions, we analyse how the performance around $f(\mathbf{x}^{(K)}(\mathbf{w}))$ changes by updating $\mathbf{w}$ either under standard meta-gradients (Eq. 1) or bootstrapped meta-gradients (Eq. 2). + +First, consider improvements under the MG update. 
In online optimisation, the MG update can achieve strong convergence guarantees if the problem is well-behaved (van Erven & Koolen, 2016), with similar guarantees in the multi-task setting (Balcan et al., 2019; Khodak et al., 2019; Denevi et al., 2019). A central component of these results is that the MG update guarantees a local improvement in the objective. Lemma 1 below presents this result in our setting, with the following notation: let $\| \mathbf{u}\| _A\coloneqq \sqrt{\langle\mathbf{u},A\mathbf{u}\rangle}$ for any square real matrix $A$ . Let $G^{T} = D^{T}D\in \mathbb{R}^{n_{x}\times n_{x}}$ , with $D\coloneqq \left[\frac{\partial}{\partial\mathbf{w}}\mathbf{x}^{(K)}(\mathbf{w})\right]^T\in \mathbb{R}^{n_w\times n_x}$ , and write $\mathbf{g}\coloneqq \nabla_x f(\mathbf{x}^{(K)})$ . Note that $\nabla_wf(\mathbf{x}^{(K)}(\mathbf{w})) = D\nabla_xf(\mathbf{x}^{(K)})$ . + +Lemma 1 (MG Descent). Let $\mathbf{w}'$ be given by Eq. 1. For $\beta$ sufficiently small, $f\big(\mathbf{x}^{(K)}(\mathbf{w}')\big) - f\big(\mathbf{x}^{(K)}(\mathbf{w})\big) = -\beta \| \nabla_x f(\mathbf{x}^{(K)})\|_{G^T}^2 + O(\beta^2) < 0$ . + +We defer all proofs to Appendix A. Lemma 1 relates the gains obtained under standard meta-gradients to the local gradient norm of the objective. Because the meta-objective is given by $f$ , the MG update is not scale-free (c.f. Schraudolph, 1999), nor invariant to re-parameterisation. If $f$ is highly non-linear, the meta-gradient can vary widely, preventing efficient performance improvement. Next, we turn to BMG, where we assume $\mu$ is differentiable and convex, with 0 being its minimum. + +Theorem 1 (BMG Descent). Let $\tilde{\mathbf{w}}$ be given by Eq. 2 for some TB $\xi$ . The BMG update satisfies + +$$ +f \big (\mathbf {x} ^ {(K)} (\tilde {\mathbf {w}}) \big) - f \big (\mathbf {x} ^ {(K)} (\mathbf {w}) \big) = \frac {\beta}{\alpha} \left(\mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)} - \alpha G ^ {T} \mathbf {g}) - \mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)})\right) + o (\beta (\alpha + \beta)). +$$ + +For $(\alpha, \beta)$ sufficiently small, there exist infinitely many $\xi$ for which $f\big(\mathbf{x}^{(K)}(\tilde{\mathbf{w}})\big) - f\big(\mathbf{x}^{(K)}(\mathbf{w})\big) < 0$ . In particular, $\xi(\mathbf{x}^{(K)}) = \mathbf{x}^{(K)} - \alpha G^T \mathbf{g}$ yields improvements + +$$ +f \big (\mathbf {x} ^ {(K)} (\tilde {\mathbf {w}}) \big) - f \big (\mathbf {x} ^ {(K)} (\mathbf {w}) \big) = - \frac {\beta}{\alpha} \mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)}) + o (\beta (\alpha + \beta)) < 0. +$$ + +This is not an optimal rate; there exist infinitely many TBs that yield greater improvements. + +Theorem 1 portrays the inherent trade-off in BMG: targets should align with the local direction of steepest descent, but provide as much learning signal as possible. Importantly, this theorem also establishes that $\mu$ directly controls for curvature, as improvements are expressed in terms of $\mu$ . While the TB $\xi_G^\alpha (\mathbf{x}^{(K)})\coloneqq \mathbf{x}^{(K)} - \alpha G^T\mathbf{g}$ yields performance improvements that are proportional to the meta-loss itself, larger improvements are possible by choosing a TB that carries greater learning signal (by increasing $\mu (\tilde{\mathbf{x}},\mathbf{x}^{(K)})$ ). To demonstrate that BMG can guarantee larger improvements to the update rule than MG, we consider the TB $\xi_G^\alpha$ with $\mu$ the (squared) Euclidean norm. Let $r\coloneqq \| \nabla f(\mathbf{x}^{(K)})\| _2 / \| G^T\nabla f(\mathbf{x}^{(K)})\| _2$ denote the gradient norm ratio. + +Corollary 1. Let $\mu = \| \cdot \| _2^2$ and $\tilde{\mathbf{x}} = \xi_G^r (\mathbf{x}^{(K)})$ . Let $\mathbf{w}'$ be given by Eq. 1 and $\tilde{\mathbf{w}}$ be given by Eq. 2. For $\beta$ sufficiently small, $f\big(\mathbf{x}^{(K)}(\tilde{\mathbf{w}})\big)\leq f\big(\mathbf{x}^{(K)}(\mathbf{w}')\big)$ , strictly if $GG^{T}\neq G^{T}$ and $G^{T}\nabla_{x}f(\mathbf{x}^{(K)}) \neq \mathbf{0}$ .
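The special case in which BMG reduces to MG (Eq. 3) can be checked numerically on a toy linear unroll. Below, `A` is the Jacobian of the (linear) unrolled learner, so $D = A^T$, and $f(\mathbf{x}) = \frac{1}{2}\|\mathbf{x}\|_2^2$; both are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 2))   # x^(K)(w) = A w, so D = A.T is the transposed Jacobian
w = rng.normal(size=2)
x = A @ w                      # learner parameters after K steps
g = x                          # f(x) = 0.5 * ||x||^2, hence grad_f(x) = x

# Standard meta-gradient (Eq. 1): grad_w f(x(w)) = D grad_f(x)
mg = A.T @ g

# BMG (Eq. 2) with mu = squared Euclidean norm and the target
# x~ = x^(K) - (1/2) grad_f(x^(K)), held fixed (no backprop through it)
target = x - 0.5 * g
bmg = -2.0 * A.T @ (target - A @ w)

assert np.allclose(mg, bmg)    # the two meta-gradients coincide
```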
+ 

Discussion Our analysis focuses on an arbitrary (albeit noiseless) objective $f$ and establishes that BMG can guarantee improved performance under a variety of TBs. We further show that BMG can yield larger local improvements than MG. To identify optimal TBs, further assumptions are required on $f$ and $\mu$ , but given these, Theorem 1 can serve as a starting point for more specialised analysis. Empirically, we find that taking $L - 1$ steps under the meta-learned update rule followed by a final gradient step on the objective performs well. Theorem 1 exposes a trade-off for targets that are "far" away. Empirically, we observe clear benefits from bootstraps that unroll the meta-learner for several steps before taking a gradient step on $f$ ; exploring other forms of bootstraps is an exciting area for future research. + +# 5 REINFORCEMENT LEARNING + +We consider a typical reinforcement learning problem, modelled as an MDP $\mathcal{M} = (\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma)$ . Given an initial state $\mathbf{s}_0\in \mathcal{S}$ , at each time step $t\in \mathbb{N}$ , the agent takes an action $\mathbf{a}_t\sim \pi_{\mathbf{x}}(\mathbf{a}\mid \mathbf{s}_t)$ from a policy $\pi :\mathcal{S}\times \mathcal{A}\to [0,1]$ parameterised by $\mathbf{x}$ . The agent obtains a reward $r_{t + 1}\sim \mathcal{R}(\mathbf{s}_t,\mathbf{a}_t,\mathbf{s}_{t + 1})$ based on the transition $\mathbf{s}_{t + 1}\sim \mathcal{P}(\mathbf{s}_{t + 1}\mid \mathbf{s}_t,\mathbf{a}_t)$ . The action-value of the agent's policy given a state $\mathbf{s}_0$ and action $\mathbf{a}_0$ is given by $Q_{\mathbf{x}}(\mathbf{s}_0,\mathbf{a}_0)\coloneqq \mathbb{E}[\sum_{t = 0}^{\infty}\gamma^{t}r_{t + 1}\mid \mathbf{s}_0,\mathbf{a}_0,\pi_{\mathbf{x}}]$ under discount rate $\gamma \in [0,1)$ . The corresponding value of policy $\pi_{\mathbf{x}}$ is given by $V_{\mathbf{x}}(\mathbf{s}_0)\coloneqq \mathbb{E}_{\mathbf{a}_0\sim \pi_{\mathbf{x}}(\mathbf{a}\mid \mathbf{s}_0)}[Q_{\mathbf{x}}(\mathbf{s}_0,\mathbf{a}_0)]$ .
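These definitions can be made concrete with a small tabular example (all numbers are made up for illustration):

```python
import numpy as np

gamma = 0.9
rewards = np.array([1.0, 0.0, 2.0])              # r_1, r_2, r_3 from one rollout
q_mc = np.sum(gamma ** np.arange(3) * rewards)   # Monte-Carlo estimate of Q_x(s0, a0)

pi_s0 = np.array([0.7, 0.3])                     # pi_x(a | s0) over two actions
q_s0 = np.array([q_mc, -1.0])                    # per-action values (second is made up)
v_s0 = pi_s0 @ q_s0                              # V_x(s0) = E_{a ~ pi}[Q_x(s0, a)]
```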
+ 

The agent's problem is to learn a policy that maximises the value given an expectation over $\mathbf{s}_0$ , defined either by an initial state distribution in the episodic setting (e.g. Atari, Section 5.2) or the stationary state-visitation distribution under the policy in the non-episodic setting (Section 5.1). Central to RL is the notion of policy-improvement, which takes a current policy $\pi_{\mathbf{x}}$ and constructs a new policy $\pi_{\mathbf{x}'}$ such that $\mathbb{E}[V_{\mathbf{x}'}] \geq \mathbb{E}[V_{\mathbf{x}}]$ . A common policy-improvement step is $\arg \max_{\mathbf{x}'} \mathbb{E}_{\mathbf{a} \sim \pi_{\mathbf{x}'}(\mathbf{a}\mid \mathbf{s})}[Q_{\mathbf{x}}(\mathbf{s},\mathbf{a})]$ . + +Most works in meta-RL rely on actor-critic algorithms (Sutton et al., 1999). These treat the above policy-improvement step as an optimisation problem and estimate a policy-gradient (Williams & Peng, 1991; Sutton et al., 1999) to optimise $\mathbf{x}$ . To estimate $V_{\mathbf{x}}$ , these introduce a critic $v_{\mathbf{z}}$ that is jointly trained with the policy. The policy is optimised under the current estimate of its value function, while the critic tracks the value function by minimising a Temporal-Difference (TD) error.
Given a rollout $\tau = (\mathbf{s}_0, \mathbf{a}_0, r_1, \mathbf{s}_1, \dots, r_T, \mathbf{s}_T)$ , the objective is given by $f(\mathbf{x}, \mathbf{z}) = \epsilon_{\mathrm{PG}} \ell_{\mathrm{PG}}(\mathbf{x}) + \epsilon_{\mathrm{EN}} \ell_{\mathrm{EN}}(\mathbf{x}) + \epsilon_{\mathrm{TD}} \ell_{\mathrm{TD}}(\mathbf{z})$ , $\epsilon_{\mathrm{PG}}, \epsilon_{\mathrm{EN}}, \epsilon_{\mathrm{TD}} \in \mathbb{R}_+$ , where + +$$ +\ell_ {\mathrm {E N}} (\mathbf {x}) = \sum_ {t \in \tau} \sum_ {\mathbf {a} \in \mathcal {A}} \pi_ {\mathbf {x}} (\mathbf {a} \mid \mathbf {s} _ {t}) \log \pi_ {\mathbf {x}} (\mathbf {a} \mid \mathbf {s} _ {t}), \quad \ell_ {\mathrm {T D}} (\mathbf {z}) = \frac {1}{2} \sum_ {t \in \tau} \left(G _ {t} ^ {(n)} - v _ {\mathbf {z}} (\mathbf {s} _ {t})\right) ^ {2}, \tag {4} +$$ + +$$ +\ell_ {\mathrm {P G}} (\mathbf {x}) = - \sum_ {t \in \tau} \rho_ {t} \log \pi_ {\mathbf {x}} (\mathbf {a} _ {t} \mid \mathbf {s} _ {t}) \left(G _ {t} ^ {(n)} - v _ {\mathbf {z}} (\mathbf {s} _ {t})\right), +$$ + +where $\rho_{t}$ denotes an importance weight and $G_{t}^{(n)}$ denotes an $n$ -step bootstrap target. Its form depends on the algorithm; in Section 5.1, we generate rollouts from $\pi_{\mathbf{x}}$ (on-policy), in which case $\rho_{t} = 1$ and $G_{t}^{(n)} = \sum_{i=0}^{(n-1)} \gamma^{i} r_{t+i+1} + \gamma^{n} v_{\overline{\mathbf{z}}}(\mathbf{s}_{t+n}) \forall t$ , where $\overline{\mathbf{z}}$ denotes fixed (non-differentiable) parameters. In the off-policy setting (Section 5.2), $\rho$ corrects for sampling bias and $G_{t}^{(n)}$ is similarly adjusted. + +![](images/ebaa86e8957b62333f05cbe4beba2d3b32d5a3f491de0d9064fb6999c5de5048.jpg) +Figure 2: Non-stationary grid-world (Section 5.1). Left: Comparison of total returns under an actor-critic agent over 50 seeds. Right: Learned entropy-regularization schedules. The figure depicts the average regularization weight $(\epsilon)$ over 4 task-cycles at 6M steps in the environment. 
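The losses in Eq. 4 admit a minimal numpy rendering for the on-policy case ($\rho_t = 1$); the function name and array shapes are illustrative, not from the paper:

```python
import numpy as np

def actor_critic_losses(pi, actions, returns, values, rho=None):
    """Toy on-policy version of Eq. 4.
    pi: [T, A] action probabilities; actions: [T] taken actions;
    returns: [T] n-step targets G_t; values: [T] critic outputs v_z(s_t)."""
    T = len(actions)
    rho = np.ones(T) if rho is None else rho
    adv = returns - values                     # G_t - v_z(s_t)
    log_pi_a = np.log(pi[np.arange(T), actions])
    l_en = np.sum(pi * np.log(pi))             # negative entropy
    l_td = 0.5 * np.sum(adv ** 2)              # critic regression to G_t
    l_pg = -np.sum(rho * log_pi_a * adv)       # policy gradient (adv held constant)
    return l_pg, l_en, l_td
```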
+ 

![](images/fbfb588978466e78e7c0e379bb98afc1ac1b5972cdc4e8ed4f3de2eb68958d41.jpg) + +# 5.1 A NON-STATIONARY AND NON-EPISODIC GRID WORLD + +We begin with a tabular grid-world with two items to collect. Once an item is collected, it is randomly re-spawned. One item yields a reward of $+1$ and the other a reward of $-1$ . The reward is flipped every 100,000 steps. To succeed, a memory-less agent must efficiently re-explore the environment. We study an on-policy actor-critic agent with $\epsilon_{\mathrm{PG}} = \epsilon_{\mathrm{TD}} = 1$ . As a baseline, we tune a fixed entropy-rate weight $\epsilon = \epsilon_{\mathrm{EN}}$ . We compare against agents that meta-learn $\epsilon$ online. For MG, we use the actor-critic loss as meta-objective ( $\epsilon$ fixed), as per Eq. 1. The setup is described in full in Appendix B.1. + +BMG Our primary focus is on the effect of bootstrapping. Because this setup is fully online, we can generate targets using the most recent $L - 1$ parameter updates and a final agent parameter update using $\epsilon = 0$ . Hence, the computational complexity of $\mathrm{BMG}$ is constant in $L$ under this implementation (see Appendix B.2). We define the matching function as the KL-divergence between $\mathbf{x}^{(K)}$ and the target, $\mu (\tilde{\mathbf{x}},\mathbf{x}^{(K)}(\mathbf{w})) = \mathrm{KL}\left(\pi_{\tilde{\mathbf{x}}}\parallel \pi_{\mathbf{x}^{(K)}}\right)$ . + +![](images/68c3f933fe3ac221f125b7ccd7b86a72fb28a9a39f3bad012f62b63f4e79e42c.jpg) +Figure 3: BMG $\varepsilon$ -greedy exploration under a $Q(\lambda)$ -agent. + +Figure 2 presents our main findings. Both MG and BMG learn adaptive entropy-rate schedules that outperform the baseline. However, MG fails if $\epsilon = 0$ in the meta-objective, as it becomes overly greedy (Figure 9). MG shows no clear benefit of longer meta-learning horizons, indicating that myopia stems from the objective itself.
In contrast, BMG exhibits greater adaptive capacity and is able to utilise longer meta-learning horizons. Horizons that are too short induce myopia, whereas horizons that are too long prevent efficient adaptation. For a given horizon, increasing $K$ is uniformly beneficial. Finally, we find that BMG outperforms MG for a given horizon without backpropagating through all updates. For instance, BMG with $K = 1$ and $L = 7$ outperforms MG with $K = 8$ . Our ablation studies (Appendix B.2) show that increasing the target bootstrap length counters myopia; however, using the meta-learned update rule for all $L$ steps can derail meta-optimisation. + +Next, we consider a new form of meta-learning: learning $\varepsilon$ -greedy exploration in a $Q(\lambda)$ -agent (precise formulation in Appendix B.3). While the $\varepsilon$ parameter has a similar effect to entropy regularisation, $\varepsilon$ is a parameter applied in the behaviour policy while acting. As it does not feature in the loss function, it is not readily optimised by existing meta-gradient approaches. In contrast, BMG can be implemented by matching the policy derived from a target action-value function, precisely as in the actor-critic case. An implication is that BMG can meta-learn without backpropagating through the update rule. Significantly, this opens up meta-learning (parts of) the behaviour policy, which is hard to achieve in the MG setup as the behaviour policy is not used in the update rule. Figure 3 shows that meta-learning $\varepsilon$ -greedy exploration in this environment significantly outperforms the best fixed $\varepsilon$ found by hyper-parameter tuning. As in the actor-critic case, we find that BMG responds positively to longer meta-learning horizons (larger $L$ ); see Appendix B.3, Figure 12 for detailed results. + +![](images/ff2be6a30634513bbd7c8c7003f617eee9c079ac9ea78f1718dcedef6c8bd729.jpg) +Figure 4: Human-normalized score across the 57 games in Atari ALE.
Left: per-game difference in score between BMG and our implementation of STACX* at 200M frames. Right: Median scores over learning compared to published baselines. Shading depicts the standard deviation across 3 seeds. + +![](images/22438a909083cc3885fc30f57a35971a9528b0182eaf97194c35017f295705c3.jpg) + +# 5.2 ATARI + +High-performing RL agents tend to rely on distributed learning systems to improve data efficiency (Kapturowski et al., 2018; Espeholt et al., 2018). This presents serious challenges for meta-learning as the policy gradient becomes noisy and volatile due to off-policy estimation (Xu et al., 2018; Zahavy et al., 2020). Theorem 1 suggests that BMG can be particularly effective in this setting under an appropriate distance function. To test these predictions, we adapt the Self-Tuning Actor-Critic (STACX; Zahavy et al., 2020) to meta-learn under BMG on the 57 environments in the Atari Arcade Learning Environment (ALE; Bellemare et al., 2013). + +Protocol We follow the original IMPALA setup (Espeholt et al., 2018), but we do not downsample or gray-scale inputs. Following the literature, we train for 200 million frames and evaluate agent performance by median Human Normalized Score (HNS) across 3 seeds (Espeholt et al., 2018; Xu et al., 2018; Zahavy et al., 2020). + +STACX The IMPALA actor-critic agent runs multiple actors asynchronously to generate experience for a centralized learner. The learner uses truncated importance sampling to correct for off-policy data in the actor-critic update, which adjusts $\rho_t$ and $G_t^{(n)}$ in Eq. 4. The STACX agent (Zahavy et al., 2020) is a state-of-the-art meta-RL agent. It builds on IMPALA in two ways: (1) it introduces auxiliary tasks in the form of additional objectives that differ only in their hyper-parameters; (2) it meta-learns the hyper-parameters of each loss function (main and auxiliary).
Meta-parameters are given by $\mathbf{w} = (\gamma^i,\epsilon_{\mathrm{PG}}^i,\epsilon_{\mathrm{EN}}^i,\epsilon_{\mathrm{TD}}^i,\lambda^i,\alpha^i)_{i = 1}^{1 + n}$ , where $\lambda$ and $\alpha$ are hyper-parameters of the importance weighting mechanism and $n = 2$ denotes the number of auxiliary tasks. STACX uses the IMPALA objective as the meta-objective with $K = 1$ . See Appendix C for a complete description. + +BMG We conduct ceteris-paribus comparisons that only alter the meta-objective: agent parameter updates are identical to those in STACX. When $L = 1$ , the target takes a gradient step on the original IMPALA loss, and hence the only difference is the form of the meta-objective; both use the same data and gradient information. For $L > 1$ , the first $L - 1$ steps bootstrap from the meta-learned update rule itself. To avoid overfitting, each of the $L - 1$ steps uses separate replay data; this extra data is not used anywhere else. To understand matching functions, we test policy matching and value matching. Policy matching is defined by $\mu(\tilde{\mathbf{x}}, \mathbf{x}^{(1)}(\mathbf{w})) = \mathrm{KL}(\pi_{\tilde{\mathbf{x}}} \parallel \pi_{\mathbf{x}^{(1)}})$ ; we also test a symmetric KL-divergence (KL-S). Value matching is defined by $\mu(\tilde{\mathbf{z}}, \mathbf{z}^{(1)}(\mathbf{w})) \coloneqq \mathbb{E}\big[(v_{\tilde{\mathbf{z}}} - v_{\mathbf{z}^{(1)}})^2\big]$ . + +Figure 4 presents our main comparison. BMG with $L = 1$ and policy-matching (KL) obtains a median HNS of $\sim 500\%$ , compared to $\sim 350\%$ for STACX. Recall that for $L = 1$ , BMG uses the same data to compute the agent parameter update, the target update, and the matching loss; hence this is an apples-to-apples comparison. Using both policy matching and value matching (with 0.25 weight on the latter) further improves the score to $\sim 520\%$ and outperforms STACX across almost all 57 games, with a few minor exceptions (left panel, Figure 4).
These results are obtained without tuning hyper-parameters for BMG. Finally, extending the meta-learning horizon by setting $L = 4$ and adjusting gradient clipping from 0.3 to 0.2 obtains a score of $\sim 610\%$ . + +![](images/5adaf9bd709990349e1af1f8d3749587bea1859aaf0acb86ed19374ce7af9449.jpg) +Figure 5: Ablations on Atari. Left: human-normalized score decomposition of the TB w.r.t. optimizer (SGD, RMS), matching function (L2, KL, KL & V), and bootstrap steps $(L)$ . BMG with (SGD, L2, $L = 1$ ) is equivalent to STACX. Center: episode return on Ms Pacman for different $L$ . Right: distribution of episode returns over all 57 games, normalized per-game by mean and standard deviation. All results are reported between 190-200M frames over 3 independent seeds. + +![](images/5ba47df26a2ce576c03cefc49a63298e368bf6bdbdf943f00b572e49b9294db4.jpg) + +![](images/efce7b91b0b12c12ce123ed54069bfb1f337a7f7dbe675ada5c2dffd066fc1b4.jpg) + +In Figure 5, we turn to ablations. In the left panel, we deconstruct BMG into STACX (i.e., MG) and compare performances. We find that roughly $45\%$ of the performance gains come from curvature correction (given by using RMSProp in the target bootstrap). The matching function can further control curvature to obtain performance improvements, accounting for roughly $25\%$ . Finally, increasing $L$ , thereby reducing myopia, accounts for about $30\%$ of the performance improvement. Comparing the cosine similarity between consecutive meta-gradients, we find that BMG improves upon STACX by two orders of magnitude. Detailed ablations are provided in Appendix C.1. + +The center panel of Figure 5 provides a deep-dive into the effect of increasing the meta-learning horizon $(L > 1)$ on Ms Pacman. Performance is uniformly increasing in $L$ , providing further support that BMG can increase the effective meta-horizon without increasing the number of update steps to backpropagate through.
A more in-depth analysis in Appendix C.3 reveals that $K$ is more sensitive to curvature and the quality of data. However, bootstrapping only from the meta-learner for all $L$ steps can lead to degeneracy (Appendix C.2, Figure 14). In terms of replay (Appendix C.2), while standard MG degrades with more replay, BMG benefits from more replay in the target bootstrap. + +The right panel of Figure 5 studies the effect of the matching function. Overall, joint policy and value matching exhibits the best performance. In contrast to recent work (Tomar et al., 2020; Hessel et al., 2021), we do not find that reversing the KL-direction is beneficial. Using only value-matching results in worse performance, as it does not optimise for efficient policy improvements. Finally, we conduct a detailed analysis of scalability in Appendix C.4. While BMG is $20\%$ slower for $K = 1$ , $L = 1$ due to the target bootstrap, it is $200\%$ faster when MG uses $K = 4$ and BMG uses $K = 1$ , $L = 3$ . + +# 6 MULTI-TASK FEW-SHOT LEARNING + +Multi-task meta-learning introduces an expectation over task objectives. BMG is applied by computing task-specific bootstrap targets, with the meta-gradient being the expectation over task-specific matching losses. For a general multi-task formulation, see Appendix D; here we focus on the few-shot classification paradigm. Let $f_{\mathcal{D}}: \mathcal{X} \to \mathbb{R}$ denote the negative log-likelihood loss on some data $\mathcal{D}$ . A task is defined as a pair of datasets $(\mathcal{D}_{\tau}, \mathcal{D}_{\tau}')$ , where $\mathcal{D}_{\tau}$ is a training set and $\mathcal{D}_{\tau}'$ is a validation set. In the $M$ -shot- $N$ -way setting, each task has $N$ classes and $\mathcal{D}_{\tau}$ contains $M$ observations per class. + +The goal of this experiment is to study how the BMG objective behaves in the multi-task setting.
For this purpose, we focus on the canonical MAML setup (Finn et al., 2017), which meta-learns an initialisation $\mathbf{x}_{\tau}^{(0)} = \mathbf{w}$ for SGD that is shared across a task distribution $p(\tau)$ . Adaptation is defined by $\mathbf{x}_{\tau}^{(k)} = \mathbf{x}_{\tau}^{(k - 1)} - \alpha \nabla f_{\mathcal{D}_{\tau}}(\mathbf{x}_{\tau}^{(k - 1)})$ , with $\alpha \in \mathbb{R}_+$ fixed. The meta-objective is the validation loss in expectation over the task distribution: $\mathbb{E}[f_{\mathcal{D}_{\tau}^{\prime}}(\mathbf{x}_{\tau}^{(K)}(\mathbf{w}))]$ . Several works have extended this setup by altering the update rule $(\varphi)$ (Lee & Choi, 2018; Zintgraf et al., 2019; Park & Oliva, 2019; Flennerhag et al., 2020). As our focus is on the meta-objective, we restrict comparisons to MAML. + +![](images/c979b25bc79b19cb9359236043fbe26e153fecdf2f59207ae0d2fe9d0b18412c.jpg) +Figure 6: MiniImagenet 5-way-5-shot meta-test performance. Left: performance as a function of meta-training batches. Center: performance as a function of wall-clock time. Right: best reported performance under each $K$ . Error bars depict standard deviation across 3 seeds. + +![](images/6e9c95475c022bca67e7e13f53adbd374aec6506c6cb4600a112897ecab23608.jpg) + +![](images/dc09a8dc14caa4f15b30241606dede105267defce6b114e36a813d8ee1d748f5.jpg) + +BMG For each task, a target $\tilde{\mathbf{x}}_{\tau}$ is bootstrapped by taking $L$ SGD steps from $\mathbf{x}_{\tau}^{(K)}$ using validation data. The BMG objective is the expected distance, $\mathbb{E}[\mu (\tilde{\mathbf{x}}_{\tau},\mathbf{x}_{\tau}^{(K)})]$ . The KL-divergence as matching function has an interesting connection to MG. The target $\tilde{\mathbf{x}}_{\tau}$ can be seen as an "expert" on task $\tau$ , so that BMG is a form of distillation (Hinton et al., 2015). The log-likelihood loss used by MG is also a KL divergence, but w.r.t. a "cold" expert that places all mass on the true label.
Raising the temperature in the target can allow BMG to transfer more information (Hinton & Plaut, 1987). + +Setup We use the MiniImagenet benchmark (Vinyals et al., 2016) and study two forms of efficiency: for data efficiency, we compare meta-test performance as a function of the number of meta-training batches; for computational efficiency, we compare meta-test performance as a function of training time. To reflect what each method would achieve for a given computational budget, we report meta-test performance for the hyper-parameter configuration with the best meta-validation performance. For MG, we tune the meta-learning rate $\beta \in \{10^{-3}, 10^{-4}\}$ , $K \in \{1, 5, 10\}$ , and options to use first-order approximations (FOMAML; Finn et al., 2017, or ANIL; Raghu et al., 2020). For BMG, we tune $\beta \in \{10^{-3}, 10^{-4}\}$ , $K \in \{1, 5\}$ , as well as $L \in \{1, 5, 10\}$ , and the direction of the KL. + +The left panel of Figure 6 presents results on data efficiency. For few meta-updates, MG and BMG are on par. For 50 000 meta-updates and beyond, BMG achieves strictly superior performance, with the performance delta increasing over meta-updates. The central panel presents results on computational efficiency; we plot the time required to reach a given meta-test performance. This describes the relationship between performance and computational complexity. We find BMG exhibits better scaling properties, reaching the best performance of MG in approximately half the time. Finally, in the right panel, we study the effect of varying $K$ . BMG achieves higher performance for both $K = 1$ and $K = 5$ . We also allow MG to use $K = 10$ , but this did not yield any significant gains. We conduct an analysis of the impact BMG has on curvature and meta-gradient variance in Appendix D.3. To summarise, we find that BMG significantly improves upon the MG meta-objective in terms of data efficiency, computational efficiency, and final performance.
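The per-task construction in this section ($K$ adaptation steps on training data, an $L$-step bootstrapped target on validation data, KL matching) can be sketched as follows; `grad_train`, `grad_val`, and `logits_fn` are stand-in oracles for $\nabla f_{\mathcal{D}_\tau}$, $\nabla f_{\mathcal{D}'_\tau}$, and the model's class logits, not code from the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    # KL(p || q) for discrete distributions with full support
    return np.sum(p * (np.log(p) - np.log(q)))

def bmg_task_loss(w, grad_train, grad_val, logits_fn, alpha, K, L):
    """Per-task BMG objective for a MAML-style setup (sketch).
    Adapt K SGD steps on training data from the shared init w,
    bootstrap a target with L further SGD steps on validation data
    (never differentiated), and match the two policies under a KL."""
    x = np.array(w, dtype=float)
    for _ in range(K):                    # inner adaptation
        x = x - alpha * grad_train(x)
    target = x.copy()
    for _ in range(L):                    # target bootstrap, held fixed
        target = target - alpha * grad_val(target)
    return kl(softmax(logits_fn(target)), softmax(logits_fn(x)))
```

In a real implementation the inner adaptation would be differentiated w.r.t. `w` (e.g. by an autodiff framework) while the target branch is wrapped in a stop-gradient; the sketch only shows the forward computation.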
# 7 CONCLUSION

In this paper, we have put forth the notion that efficient meta-learning does not require the meta-objective to be expressed directly in terms of the learner's objective. Instead, we present an alternative approach that has the meta-learner match a desired target; here, we bootstrap from the meta-learned update rule itself to produce future targets. While using the meta-learned update rule as the bootstrap allows for an open-ended meta-learning process, some grounding is necessary. As an instance of this approach, we study bootstrapped meta-gradients, which, under appropriate choices of targets and matching functions, guarantee performance improvements that can be larger than those of standard meta-gradients. Empirically, we observe substantial improvements on Atari and achieve a new state-of-the-art, while obtaining significant efficiency gains in a multi-task meta-learning setting. We explore new possibilities afforded by the target-matching nature of the algorithm and demonstrate that it can learn to explore in an $\epsilon$-greedy $Q$-learning agent.

# REFERENCES

Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and Martin Riedmiller. Maximum a Posteriori Policy Optimisation. In International Conference on Learning Representations, 2018.
Ferran Alet, Martin F. Schneider, Tomas Lozano-Perez, and Leslie Pack Kaelbling. Meta-Learning Curiosity Algorithms. In International Conference on Learning Representations, 2020.
Shun-Ichi Amari. Natural Gradient Works Efficiently in Learning. Neural computation, 10(2): 251-276, 1998.
Marcin Andrychowicz, Misha Denil, Sergio Gómez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to Learn by Gradient Descent by Gradient Descent. In Advances in Neural Information Processing Systems, 2016.
Antreas Antoniou, Harrison Edwards, and Amos J. Storkey. How to Train Your MAML.
In International Conference on Learning Representations, 2019. +Maria-Florina Balcan, Mikhail Khodak, and Ameet Talwalkar. Provable Guarantees for Gradient-Based Meta-Learning. In International Conference on Machine Learning, 2019. +M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The Arcade Learning Environment: An Evaluation Platform for General Agents. Journal of Artificial Intelligence Research, 47:253-279, 2013. +Yoshua Bengio. Gradient-Based Optimization of Hyperparameters. Neural computation, 12(8): 1889-1900, 2000. +Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a Synaptic Learning Rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1991. +Y Cao, T Chen, Z Wang, and Y Shen. Learning to Optimize in Swarms. Advances in Neural Information Processing Systems, 2019. +Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap, and Nando de Freitas. Learning to learn for Global Optimization of Black Box Functions. In Advances in Neural Information Processing Systems, 2016. +Giulia Denevi, Dimitris Stamos, Carlo Ciliberto, and Massimiliano Pontil. Online-Within-Online Meta-Learning. In Advances in Neural Information Processing Systems, 2019. +Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A Large-Scale Hierarchical Image Database. In Computer Vision and Pattern Recognition, 2009. +Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Vlad Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. Impala: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures. In International Conference on Machine Learning, 2018. +Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In International Conference on Machine Learning, 2017. +Sebastian Flennerhag, Pablo G. Moreno, Neil D. Lawrence, and Andreas Damianou. 
Transferring Knowledge across Learning Processes. In International Conference on Learning Representations, 2019.
Sebastian Flennerhag, Andrei A. Rusu, Razvan Pascanu, Francesco Visin, Hujun Yin, and Raia Hadsell. Meta-Learning with Warped Gradient Descent. In International Conference on Learning Representations, 2020.
Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas L. Griffiths. Recasting Gradient-Based Meta-Learning as Hierarchical Bayes. In International Conference on Learning Representations, 2018.

Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Remi Munos, and Michal Valko. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning. In Advances in Neural Information Processing Systems, 2020.
Zhaohan Daniel Guo, Bernardo Avila Pires, Bilal Piot, Jean-Bastien Grill, Florent Altché, Rémi Munos, and Mohammad Gheshlaghi Azar. Bootstrap Latent-Predictive Representations for Multitask Reinforcement Learning. In International Conference on Machine Learning, 2020.
Matteo Hessel, Ivo Danihelka, Fabio Viola, Arthur Guez, Simon Schmitt, Laurent Sifre, Theophane Weber, David Silver, and Hado van Hasselt. Muesli: Combining Improvements in Policy Optimization. arXiv preprint arXiv:2104.06159, 2021.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the Knowledge in a Neural Network. arXiv preprint arXiv:1503.02531, 2015.
Geoffrey E. Hinton and David C. Plaut. Using Fast Weights to Deblur Old Memories. In Cognitive Science Society, 1987.
Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning To Learn Using Gradient Descent. In International Conference on Artificial Neural Networks, 2001.
Ghassen Jerfel, Erin Grant, Tom Griffiths, and Katherine A Heller. Reconciling Meta-Learning and Continual Learning with Online Mixtures of Tasks.
In Advances in Neural Information Processing Systems, 2019.
Norman P Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, and Al Borchers. In-Datacenter Performance Analysis of a Tensor Processing Unit. In International Symposium on Computer Architecture, 2017.
Sham M Kakade. A Natural Policy Gradient. In Advances in Neural Information Processing Systems, 2001.
Steven Kapturowski, Georg Ostrovski, John Quan, Remi Munos, and Will Dabney. Recurrent Experience Replay in Distributed Reinforcement Learning. In International Conference on Learning Representations, 2019.
Mikhail Khodak, Maria-Florina F Balcan, and Ameet S Talwalkar. Adaptive Gradient-Based Meta-Learning Methods. In Advances in Neural Information Processing Systems, 2019.
Louis Kirsch, Sjoerd van Steenkiste, and Jürgen Schmidhuber. Improving Generalization in Meta Reinforcement Learning Using Learned Objectives. arXiv preprint arXiv:1910.04098, 2019.
Yoonho Lee and Seungjin Choi. Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace. In International Conference on Machine Learning, 2018.
Kaifeng Lv, Shunhua Jiang, and Jian Li. Learning Gradient Descent: Better Generalization and Longer Horizons. In International Conference on Machine Learning, 2017.
Marlos C. Machado, Marc G. Bellemare, Erik Talvitie, Joel Veness, Matthew Hausknecht, and Michael Bowling. Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents. Journal of Artificial Intelligence Research, 61:523-562, 2018.
Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient-Based Hyperparameter Optimization Through Reversible Learning. In International Conference on Machine Learning, 2015.
Luke Metz, Niru Maheswaranathan, Jeremy Nixon, Daniel Freeman, and Jascha Sohl-Dickstein. Understanding and Correcting Pathologies in the Training of Learned Optimizers.
In International Conference on Machine Learning, 2019. +Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with Deep Reinforcement Learning. arXiv preprint arXiv:1312.5602, 2013. + +Alex Nichol, Joshua Achiam, and John Schulman. On First-Order Meta-Learning Algorithms. arXiv preprint ArXiv:1803.02999, 2018. +Junhyuk Oh, Matteo Hessel, Wojciech M Czarnecki, Zhongwen Xu, Hado P van Hasselt, Satinder Singh, and David Silver. Discovering Reinforcement Learning Algorithms. In Advances in Neural Information Processing Systems, volume 33, 2020. +Eunbyung Park and Junier B Oliva. Meta-Curvature. In Advances in Neural Information Processing Systems, 2019. +Razvan Pascanu and Yoshua Bengio. Revisiting Natural Gradient for Deep Networks. In International Conference on Learning Representations, 2014. +Jing Peng and Ronald J. Williams. Incremental Multi-Step Q-Learning. In International Conference on Machine Learning, 1994. +Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol Vinyals. Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML. In International Conference on Learning Representations, 2020. +Sachin Ravi and Hugo Larochelle. Optimization as a Model for Few-Shot Learning. In International Conference on Learning Representations, 2017. +Esteban Real, Chen Liang, David R. So, and Quoc V. Le. AutoML-Zero: Evolving Machine Learning Algorithms From Scratch. In International Conference on Machine Learning, 2020. +Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy Distillation. arXiv preprint arXiv:1511.06295, 2015. +Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-Learning with Latent Embedding Optimization. In International Conference on Learning Representations, 2019. 
+Jürgen Schmidhuber. *Evolutionary Principles in Self-Referential Learning*. PhD thesis, Technische Universität München, 1987. +Jürgen Schmidhuber. A 'self-referential' weight matrix. In International Conference on Artificial Neural Networks, pp. 446-450. Springer, 1993. +Simon Schmitt, Matteo Hessel, and Karen Simonyan. Off-Policy Actor-Critic with Shared Experience Replay. In International Conference on Machine Learning, 2020. +Nicol N. Schraudolph. Local Gain Adaptation in Stochastic Gradient Descent. In International Conference on Artificial Neural Networks, 1999. +John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust Region Policy Optimization. In International Conference on Machine Learning, 2015. +John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal Policy Optimization Algorithms. arXiv preprint arXiv:1707.06347, 2017. +Elizabeth S. Spelke and Katherine D Kinzler. Core Knowledge. Developmental science, 10(1):89-96, 2007. +Bradly C. Stadie, Ge Yang, Rein Houthooft, Xi Chen, Yan Duan, Yuhuai Wu, Pieter Abbeel, and Ilya Sutskever. Some Considerations on Learning to Explore via Meta-Reinforcement Learning. In Advances in Neural Information Processing Systems, 2018. +Richard S. Sutton. Learning to Predict by the Methods of Temporal Differences. Machine learning, 3(1):9-44, 1988. +Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. Policy Gradient Methods for Reinforcement Learning with Function Approximation. In Advances in Neural Information Processing Systems, volume 99, 1999. + +Yee Whye Teh, Victor Bapst, Wojciech M Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, and Razvan Pascanu. Distral: Robust Multitask Reinforcement Learning. In Advances in Neural Information Processing Systems, 2017. +Manan Tomar, Lior Shani, Yonathan Efroni, and Mohammad Ghavamzadeh. Mirror Descent Policy Optimization. arXiv preprint arXiv:2005.09814, 2020. 
+Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, and Pierre-Antoine Manzagol. Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples. International Conference on Learning Representations, 2020. +Tim van Erven and Wouter M Koolen. MetaGrad: Multiple Learning Rates in Online Learning. In Advances in Neural Information Processing Systems, 2016. +Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching Networks for One Shot Learning. In Advances in Neural Information Processing Systems, 2016. +Jane X. Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Rémi Munos, Charles Blundell, Dharshan Kumaran, and Matthew Botvinick. Learning to Reinforcement Learn. In Annual Meeting of the Cognitive Science Society, 2016. +Olga Wichrowska, Niru Maheswaranathan, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Nando de Freitas, and Jascha Sohl-Dickstein. Learned Optimizers that Scale and Generalize. In International Conference on Machine Learning, 2017. +Ronald J Williams and Jing Peng. Function Optimization using Connectionist Reinforcement Learning Algorithms. Connection Science, 3(3):241-268, 1991. +Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger B. Grosse. Understanding Short-Horizon Bias in Stochastic Meta-Optimization. In International Conference on Learning Representations, 2018. +Zhongwen Xu, Hado P. van Hasselt, and David Silver. Meta-Gradient Reinforcement Learning. In Advances in Neural Information Processing Systems, 2018. +Mingzhang Yin, George Tucker, Mingyuan Zhou, Sergey Levine, and Chelsea Finn. Meta-Learning without Memorization. In International Conference on Learning Representations, 2020. +Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, Hado P van Hasselt, David Silver, and Satinder Singh. A Self-Tuning Actor-Critic Algorithm. 
Advances in Neural Information Processing Systems, 33, 2020. +Zeyu Zheng, Junhyuk Oh, and Satinder Singh. On Learning Intrinsic Rewards for Policy Gradient Methods. Advances in Neural Information Processing Systems, 2018. +Luisa Zintgraf, Kyriacos Shiarli, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Fast Context Adaptation via Meta-Learning. In International Conference on Machine Learning, 2019. + +# Bootstrapped Meta-Learning: Appendix + +# CONTENTS + +Appendix A: proofs accompanying Section 4. + +Appendix B: non-stationary Grid-World (Section 5.1). + +Appendix C: ALE Atari (Section 5.2). + +Appendix D: Multi-task meta-learning, Few-Shot Learning on MiniImagenet (Section 6). + +# A PROOFS + +This section provides complete proofs for the results in Section 4. Throughout, we assume that $(\mathbf{x}^{(0)},\mathbf{h}^{(0)},\mathbf{w})$ is given and write $\mathbf{x}\coloneqq \mathbf{x}^{(0)}$ , $\mathbf{h}\coloneqq \mathbf{h}^{(0)}$ . We assume that $\mathbf{h}$ evolves according to some process that maps a history $H^{(k)}\coloneqq (\mathbf{x}^{(0)},\mathbf{h}^{(0)},\ldots ,\mathbf{x}^{(k - 1)},\mathbf{h}^{(k - 1)},\mathbf{x}^{(k)})$ into a new learner state $\mathbf{h}^{(k)}$ , including any sampling of data (c.f. Section 3). Recall that we restrict attention to the noiseless setting, and hence updates are considered in expectation. We define the map $\mathbf{x}^{(K)}(\mathbf{w})$ by + +$$ +\begin{array}{l} \mathbf {x} ^ {(1)} = \mathbf {x} ^ {(0)} + \varphi \left(\mathbf {x} ^ {(0)}, \mathbf {h} ^ {(0)}, \mathbf {w}\right) \\ \mathbf {x} ^ {(2)} = \mathbf {x} ^ {(1)} + \varphi \left(\mathbf {x} ^ {(1)}, \mathbf {h} ^ {(1)}, \mathbf {w}\right) \\ \end{array} +$$ + +: + +$$ +\mathbf {x} ^ {(K)} = \mathbf {x} ^ {(K - 1)} + \varphi \big (\mathbf {x} ^ {(K - 1)}, \mathbf {h} ^ {(K - 1)}, \mathbf {w} \big). 
$$

The derivative $\frac{\partial}{\partial\mathbf{w}}\mathbf{x}^{(K)}(\mathbf{w})$ differentiates through each step of this process (Hochreiter et al., 2001). As previously stated, we assume $f$ is Lipschitz and that $\mathbf{x}^{(K)}$ is Lipschitz w.r.t. $\mathbf{w}$. We are now in a position to prove the results from the main text; we re-state them for convenience.

Lemma 1 (MG Descent). Let $\mathbf{w}'$ be given by Eq. 1. For $\beta$ sufficiently small, $f\big(\mathbf{x}^{(K)}(\mathbf{w}')\big) - f\big(\mathbf{x}^{(K)}(\mathbf{w})\big) = -\beta \| \nabla_x f(\mathbf{x}^{(K)})\|_{G^T}^2 + o(\beta^2) < 0$.

Proof. Define $\mathbf{g} \coloneqq \nabla_{\boldsymbol{x}} f(\mathbf{x}^{(K)}(\mathbf{w}))$. The meta-gradient at $(\mathbf{x}, \mathbf{h}, \mathbf{w})$ is given by $\nabla_{\boldsymbol{w}} f(\mathbf{x}^{(K)}(\mathbf{w})) = D\mathbf{g}$. Under Eq. 1, we find $\mathbf{w}' = \mathbf{w} - \beta D\mathbf{g}$. By first-order Taylor series expansion of $f$ around $(\mathbf{x}, \mathbf{h}, \mathbf{w}')$ with respect to $\mathbf{w}$:

$$
\begin{array}{l} f \left(\mathbf {x} ^ {(K)} \left(\mathbf {w} ^ {\prime}\right)\right) = f \left(\mathbf {x} ^ {(K)} (\mathbf {w})\right) + \langle D \mathbf {g}, \mathbf {w} ^ {\prime} - \mathbf {w} \rangle + o \left(\beta^ {2} \| \mathbf {g} \| _ {G ^ {T}} ^ {2}\right) \\ = f \left(\mathbf {x} ^ {(K)} (\mathbf {w})\right) - \beta \langle D \mathbf {g}, D \mathbf {g} \rangle + o \left(\beta^ {2} \| \mathbf {g} \| _ {G ^ {T}} ^ {2}\right) \\ = f \big (\mathbf {x} ^ {(K)} (\mathbf {w}) \big) - \beta \| \mathbf {g} \| _ {G ^ {T}} ^ {2} + o \big (\beta^ {2} \| \mathbf {g} \| _ {G ^ {T}} ^ {2} \big), \\ \end{array}
$$

with $\| \mathbf{g}\|_{G^T}^2\geq 0$ by virtue of positive semi-definiteness of $G$. Hence, for $\beta$ sufficiently small the residual vanishes and the conclusion follows.

Theorem 1 (BMG Descent). Let $\tilde{\mathbf{w}}$ be given by Eq. 2 for some target bootstrap (TB) $\xi$.
The BMG update satisfies

$$
f \big (\mathbf {x} ^ {(K)} (\tilde {\mathbf {w}}) \big) - f \big (\mathbf {x} ^ {(K)} (\mathbf {w}) \big) = \frac {\beta}{\alpha} \left(\mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)} - \alpha G ^ {T} \mathbf {g}) - \mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)})\right) + o (\beta (\alpha + \beta)).
$$

For $(\alpha, \beta)$ sufficiently small, there exist infinitely many $\xi$ for which $f\big(\mathbf{x}^{(K)}(\tilde{\mathbf{w}})\big) - f\big(\mathbf{x}^{(K)}(\mathbf{w})\big) < 0$. In particular, $\xi(\mathbf{x}^{(K)}) = \mathbf{x}^{(K)} - \alpha G^T \mathbf{g}$ yields improvements

$$
f \big (\mathbf {x} ^ {(K)} (\tilde {\mathbf {w}}) \big) - f \big (\mathbf {x} ^ {(K)} (\mathbf {w}) \big) = - \frac {\beta}{\alpha} \mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)}) + o (\beta (\alpha + \beta)) < 0.
$$

This is not an optimal rate; there exist infinitely many TBs that yield greater improvements.

Proof. The bootstrapped meta-gradient at $(\mathbf{x},\mathbf{h},\mathbf{w})$ is given by

$$
\nabla_ {w} \mu \Big (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)} (\mathbf {w}) \Big) = D \mathbf {u}, \quad \text {where} \quad \mathbf {u} := \nabla_ {z} \mu \big (\tilde {\mathbf {x}}, \mathbf {z} \big) \Big | _ {\mathbf {z} = \mathbf {x} ^ {(K)}}.
$$

Under Eq. 2, we find $\tilde{\mathbf{w}} = \mathbf{w} - \beta D\mathbf{u}$. Define $\mathbf{g} \coloneqq \nabla_{x}f(\mathbf{x}^{(K)})$.
By first-order Taylor Series Expansion of $f$ around $(\mathbf{x},\mathbf{h},\tilde{\mathbf{w}})$ with respect to $\mathbf{w}$ : + +$$ +\begin{array}{l} f \left(\mathbf {x} ^ {(K)} (\tilde {\mathbf {w}})\right) = f \left(\mathbf {x} ^ {(K)} (\mathbf {w})\right) + \langle D \mathbf {g}, \tilde {\mathbf {w}} - \mathbf {w} \rangle + o \left(\beta^ {2} \| D \mathbf {u} \| _ {2} ^ {2}\right) \\ = f \left(\mathbf {x} ^ {(K)} (\mathbf {w})\right) - \beta \langle D \mathbf {g}, D \mathbf {u} \rangle + o \left(\beta^ {2} \| D \mathbf {u} \| _ {2} ^ {2}\right) \\ = f \left(\mathbf {x} ^ {(K)} (\mathbf {w})\right) - \beta \langle \mathbf {u}, G ^ {T} \mathbf {g} \rangle + o \left(\beta^ {2} \| \mathbf {u} \| _ {G ^ {T}} ^ {2}\right). \tag {5} \\ \end{array} +$$ + +To bound the inner product, expand $\mu (\tilde{\mathbf{x}},\cdot)$ around a point $\mathbf{x}^{(K)} + \mathbf{d}$ , where $\mathbf{d}\in \mathbb{R}^{n_x}$ , w.r.t. $\mathbf{x}^{(K)}$ : + +$$ +\mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)} + \mathbf {d}) = \mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)}) + \langle \mathbf {u}, \mathbf {d} \rangle + o (\| \mathbf {d} \| _ {2} ^ {2}). +$$ + +Thus, choose $\mathbf{d} = -\alpha G^T\mathbf{g}$ , for some $\alpha \in \mathbb{R}_+$ and rearrange to get + +$$ +- \beta \langle \mathbf {u}, G ^ {T} \mathbf {g} \rangle = \frac {\beta}{\alpha} \left(\mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)} - \alpha G ^ {T} \mathbf {g}) - \mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)})\right) + o (\alpha \beta \| \mathbf {g} \| _ {G ^ {T}} ^ {2}). +$$ + +Substitute into Eq. 
5 to obtain

$$
\begin{array}{l} f \left(\mathbf {x} ^ {(K)} (\tilde {\mathbf {w}})\right) - f \left(\mathbf {x} ^ {(K)} (\mathbf {w})\right) = \frac {\beta}{\alpha} \left(\mu \left(\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)} - \alpha G ^ {T} \mathbf {g}\right) - \mu \left(\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)}\right)\right) \tag {6} \\ + o (\alpha \beta \| \mathbf {g} \| _ {G ^ {T}} ^ {2} + \beta^ {2} \| \mathbf {u} \| _ {G ^ {T}} ^ {2}). \\ \end{array}
$$

Thus, the BMG update comes out as the difference between two distances. The first distance is a distortion term that measures how well the target aligns with the tangent vector $-G^{T}\mathbf{g}$, which is the direction of steepest descent in the immediate vicinity of $\mathbf{x}^{(K)}$ (c.f. Lemma 1). The second term measures learning; a greater distance carries more signal for meta-learning. Combined, the two capture the inherent trade-off in BMG: moving the target further away increases distortions from curvature, but may also increase the learning signal. Finally, the residual captures distortions due to curvature.

Existence. To show that there always exists a target that guarantees a descent direction, choose $\tilde{\mathbf{x}} = \mathbf{x}^{(K)} - \alpha G^T\mathbf{g}$. This eliminates the first distance in Eq. 6, as the target is perfectly aligned with the direction of steepest descent, and we obtain

$$
f \big (\mathbf {x} ^ {(K)} (\tilde {\mathbf {w}}) \big) - f \big (\mathbf {x} ^ {(K)} (\mathbf {w}) \big) = - \frac {\beta}{\alpha} \mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)}) + o (\beta (\alpha + \beta)).
$$

The residual vanishes at a faster rate as $\alpha$ and $\beta$ go to 0. Hence, there is some $(\bar{\alpha},\bar{\beta})\in \mathbb{R}_{+}^{2}$ such that for any $(\alpha ,\beta)\in (0,\bar{\alpha})\times (0,\bar{\beta})$, $f\big(\mathbf{x}^{(K)}(\tilde{\mathbf{w}})\big) - f\big(\mathbf{x}^{(K)}(\mathbf{w})\big) < 0$.
For any such choice of $(\alpha ,\beta)$, by virtue of differentiability in $\mu$ there exists some neighborhood $N$ around $\mathbf{x}^{(K)} - \alpha G^T\mathbf{g}$ for which any $\tilde{\mathbf{x}}\in N$ satisfies $f\big(\mathbf{x}^{(K)}(\tilde{\mathbf{w}})\big) - f\big(\mathbf{x}^{(K)}(\mathbf{w})\big) < 0$.

Efficiency. We are to show that, given $(\alpha, \beta)$, the set of optimal targets does not include $\tilde{\mathbf{x}} = \mathbf{x}^{(K)} - \alpha G^T \mathbf{g}$. To show this, it is sufficient to demonstrate that this is not a local minimum of the right-hand side in Eq. 6. Indeed,

$$
\begin{array}{l} \nabla_ {\tilde {x}} \left. \left(\frac {\beta}{\alpha} \left(\mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)} - \alpha G ^ {T} \mathbf {g}) - \mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)})\right) + o (\alpha \beta \| \mathbf {g} \| _ {G ^ {T}} ^ {2} + \beta^ {2} \| \mathbf {u} \| _ {G ^ {T}} ^ {2})\right) \right| _ {\tilde {\mathbf {x}} = \mathbf {x} ^ {(K)} - \alpha G ^ {T} \mathbf {g}} \\ = - \frac {\beta}{\alpha} \nabla_ {\tilde {\mathbf {x}}} \left. \mu (\tilde {\mathbf {x}}, \mathbf {x} ^ {(K)}) \right| _ {\tilde {\mathbf {x}} = \mathbf {x} ^ {(K)} - \alpha G ^ {T} \mathbf {g}} + \beta^ {2} \mathbf {o} \neq \mathbf {0}, \\ \end{array}
$$

where $\beta^2\mathbf{o}$ is the gradient of the residual ($\| \mathbf{u}\| _2^2$ depends on $\tilde{\mathbf{x}}$) w.r.t. $\tilde{\mathbf{x}}$, evaluated at $\tilde{\mathbf{x}} = \mathbf{x}^{(K)} - \alpha G^T\mathbf{g}$. To complete the proof, let $\tilde{\mathbf{u}}$ denote the above gradient. Construct an alternative target $\tilde{\mathbf{x}}' = \tilde{\mathbf{x}} -\eta \tilde{\mathbf{u}}$ for some $\eta \in \mathbb{R}_+$. By a standard gradient descent argument, there is some $\bar{\eta}$ such that any $\eta \in (0,\bar{\eta})$ yields an alternative target $\tilde{\mathbf{x}}'$ that improves over $\tilde{\mathbf{x}}$.

We now prove that, controlling for scale, BMG can yield larger performance gains than MG.
Recall that $\xi_G^\alpha (\mathbf{x}^{(K)}) = \mathbf{x}^{(K)} - \alpha G^T\nabla f(\mathbf{x}^{(K)})$. Consider $\xi_G^r$, with $r\coloneqq \| \nabla f(\mathbf{x}^{(K)})\| _2 / \| G^T\nabla f(\mathbf{x}^{(K)})\| _2$.

Corollary 1. Let $\mu = \| \cdot \|_2^2$ and $\tilde{\mathbf{x}} = \xi_G^r (\mathbf{x}^{(K)})$. Let $\mathbf{w}'$ be given by Eq. 1 and $\tilde{\mathbf{w}}$ be given by Eq. 2. For $\beta$ sufficiently small, $f\big(\mathbf{x}^{(K)}(\tilde{\mathbf{w}})\big) \leq f\big(\mathbf{x}^{(K)}(\mathbf{w}')\big)$, with strict inequality if $GG^T \neq G^T$.

Proof. Let $\mathbf{g} \coloneqq \nabla_x f\big(\mathbf{x}^{(K)}\big)$. By Lemma 1, $f\big(\mathbf{x}^{(K)}(\mathbf{w}')\big) - f\big(\mathbf{x}^{(K)}(\mathbf{w})\big) = -\beta \langle G^T \mathbf{g}, \mathbf{g} \rangle + O(\beta^2)$. From Theorem 1, with $\mu = \| \cdot \|_2^2$, $f\big(\mathbf{x}^{(K)}(\tilde{\mathbf{w}})\big) - f\big(\mathbf{x}^{(K)}(\mathbf{w})\big) = -\beta r\langle G^T \mathbf{g}, G^T \mathbf{g} \rangle + O(\beta (\alpha + \beta))$. For $\beta$ sufficiently small, the inner products dominate and we have

$$
f \big (\mathbf {x} ^ {(K)} (\tilde {\mathbf {w}}) \big) - f \big (\mathbf {x} ^ {(K)} (\mathbf {w} ^ {\prime}) \big) \approx - \beta \left(r \langle G ^ {T} \mathbf {g}, G ^ {T} \mathbf {g} \rangle - \langle G ^ {T} \mathbf {g}, \mathbf {g} \rangle\right).
$$

To determine the sign of the expression in parenthesis, consider the problem

$$
\max _ {\mathbf {v} \in \mathbb {R} ^ {n _ {x}}} \left\langle G ^ {T} \mathbf {g}, \mathbf {v} \right\rangle \quad \text {s . t .} \quad \| \mathbf {v} \| _ {2} \leq 1.
$$

Form the Lagrangian $\mathcal{L}(\mathbf{v},\lambda)\coloneqq \langle G^T\mathbf{g},\mathbf{v}\rangle -\lambda (\| \mathbf{v}\| _2 - 1)$.
Solve for first-order conditions:

$$
G ^ {T} \mathbf {g} - \lambda \frac {\mathbf {v} ^ {*}}{\| \mathbf {v} ^ {*} \| _ {2}} = 0 \Rightarrow \mathbf {v} ^ {*} = \frac {\| \mathbf {v} ^ {*} \| _ {2}}{\lambda} G ^ {T} \mathbf {g}.
$$

If $\lambda = 0$, then we must have $\| \mathbf{v}^* \|_2 = \infty$, which clearly is not an optimal solution. Complementary slackness then implies $\| \mathbf{v}^* \|_2 = 1$, which gives $\lambda = \| \mathbf{v}^* \|_2 \| G^T \mathbf{g} \|_2$ and hence $\mathbf{v}^* = G^T \mathbf{g} / \| G^T \mathbf{g} \|_2$. By virtue of being the maximiser, $\mathbf{v}^*$ attains a higher function value than any other $\mathbf{v}$ with $\| \mathbf{v} \|_2 \leq 1$, in particular $\mathbf{v} = \mathbf{g} / \| \mathbf{g} \|_2$. Evaluating the objective at these two points gives

$$
\frac {\left\langle G ^ {T} \mathbf {g} , G ^ {T} \mathbf {g} \right\rangle}{\| G ^ {T} \mathbf {g} \| _ {2}} \geq \frac {\left\langle G ^ {T} \mathbf {g} , \mathbf {g} \right\rangle}{\| \mathbf {g} \| _ {2}} \Rightarrow r \langle G ^ {T} \mathbf {g}, G ^ {T} \mathbf {g} \rangle \geq \langle G ^ {T} \mathbf {g}, \mathbf {g} \rangle ,
$$

where we use that $r = \|\mathbf{g}\|_2 / \|G^T\mathbf{g}\|_2$ by definition. Thus $f\big(\mathbf{x}^{(K)}(\tilde{\mathbf{w}})\big) \leq f\big(\mathbf{x}^{(K)}(\mathbf{w}')\big)$, with strict inequality if $GG^T \neq G^T$ and $G^T\mathbf{g} \neq \mathbf{0}$.

# B NON-STATIONARY NON-EPISODIC REINFORCEMENT LEARNING

# B.1 SETUP

This experiment is designed to provide a controlled setting that delineates the differences between standard meta-gradients and bootstrapped meta-gradients. The environment is a $5 \times 5$ grid world with two objects: a blue and a red square (Figure 7). We therefore refer to this environment as the two-colors domain. At each step, the agent (green) can take an action to move up, down, left, or right, and observes the position of each square and of itself.
If the agent reaches a coloured square, it obtains a reward of either $+1$ or $-1$, and the square is randomly moved to an unoccupied location. Every 100 000 steps, the reward for each object flips. For all other transitions, the agent obtains a reward of $-0.04$. Observations are constructed by concatenating one-hot encodings of the $x$- and $y$-coordinates of the two coloured squares and the agent's position, with a total dimension of $2 \times 3 \times 5 = 30$ (two coordinates for each of three objects, with each one-hot vector being 5-dimensional).

The two-colors domain is designed such that the central component determining how well a memory-less agent adapts is its exploration. Our agents can only regulate exploration through policy entropy. Thus, to converge on optimal task behaviour, the agent must reduce policy entropy. Once the task switches, the agent encounters what is effectively a novel task (due to it being memory-less). To rapidly adapt, the agent must first increase entropy in the policy to cover the state-space. Once the agent observes rewarding behaviour, it must then reduce entropy to converge on task-optimal behaviour.

![](images/77b4cdcbcf1a80fdc8b24ceb87e75df52c4718593b9ae8be5d36856dde3dc4b3.jpg)
Figure 7: Two-colors grid-world. The agent's goal is to collect either blue or red squares by navigating the green square.

All experiments run on the CPU of a single machine. The agent interacts with the environment and updates its parameters synchronously in a single stream of experience. A step thus comprises the following operations, in order: (1) given an observation, the agent takes an action; (2) if applicable, the agent updates its parameters; (3) the environment transitions based on the action and returns a new observation. The parameter update step is implemented differently depending on the agent, as described below.

Algorithm 1 N-step RL actor loop
Require: $N$ Rollout length.
Require: $\mathbf{x}\in \mathbb{R}^{n_x}$ Policy parameters.
Require: $\mathbf{s}$ Environment state.
$\mathcal{B}\gets (\mathbf{s})$ Initialise rollout.
for $t = 1,2,\dots ,N$ do
$\quad \mathbf{a} \sim \pi_{\mathbf{x}}(\mathbf{s})$ Sample action.
$\quad \mathbf{s}, r \leftarrow \mathrm{env}(\mathbf{s},\mathbf{a})$ Take a step in environment.
$\quad \mathcal{B}\gets \mathcal{B}\cup (\mathbf{a},r,\mathbf{s})$ Add to rollout.
end for
return $\mathbf{s}$, $\mathcal{B}$

Algorithm 2 $K$-step online learning loop
Require: $N,K$ Rollout length, meta-update length.
Require: $\mathbf{x}\in \mathbb{R}^{n_x},\mathbf{z}\in \mathbb{R}^{n_z},\mathbf{w}\in \mathbb{R}^{n_w}$ Policy, value function, and meta parameters.
Require: $\mathbf{s}$ Environment state.
for $k = 1,2,\ldots ,K$ do
$\quad \mathbf{s},\mathcal{B}\gets$ ActorLoop($\mathbf{x},\mathbf{s},N$) Algorithm 1.
$\quad (\mathbf{x},\mathbf{z})\gets \varphi ((\mathbf{x},\mathbf{z}),\mathcal{B},\mathbf{w})$ Inner update step.
end for
return $\mathbf{s},\mathbf{x},\mathbf{z},\mathcal{B}$

Algorithm 3 Online RL with BMG
Require: $N,K,L$ Rollout length, meta-update length, bootstrap length.
Require: $\mathbf{x}\in \mathbb{R}^{n_x},\mathbf{z}\in \mathbb{R}^{n_z},\mathbf{w}\in \mathbb{R}^{n_w}$ Policy, value function, and meta parameters.
Require: $\mathbf{s}$ Environment state.
$\mathbf{u}\leftarrow (\mathbf{x},\mathbf{z})$
while True do
$\quad \mathbf{s},\mathbf{u}^{(K)},\text{--}\leftarrow$ InnerLoop($\mathbf{u},\mathbf{w},\mathbf{s},N,K$) $K$-step inner loop, Algorithm 2.
$\quad \mathbf{s},\mathbf{u}^{(K+L-1)},\mathcal{B}\leftarrow$ InnerLoop($\mathbf{u}^{(K)},\mathbf{w},\mathbf{s},N,L-1$) $(L-1)$-step bootstrap, Algorithm 2.
$\quad \tilde{\mathbf{u}}\leftarrow \mathbf{u}^{(K+L-1)} - \alpha\nabla_{\mathbf{u}}\mathcal{L}(\mathbf{u}^{(K+L-1)},\mathcal{B})$ Gradient step on objective $\mathcal{L}$.
$\quad \mathbf{w}\leftarrow \mathbf{w} - \beta\nabla_{\mathbf{w}}\mu (\tilde{\mathbf{u}},\mathbf{u}^{(K)}(\mathbf{w}))$ BMG outer step.
$\quad \mathbf{u}\leftarrow \mathbf{u}^{(K+L-1)}$ Continue from most recent parameters.
end while

# B.2 ACTOR-CRITIC EXPERIMENTS

Agent The first agent we evaluate is a simple actor-critic which implements a softmax policy ($\pi_{\mathbf{x}}$) and a critic ($v_{\mathbf{z}}$) using separate feed-forward MLPs. Agent parameter updates are done according to the actor-critic loss in Eq. 4 with the on-policy n-step return target.
For a given parameterisation of the agent, we interact with the environment for $N = 16$ steps, collecting all observations, rewards, and actions into a rollout (Algorithm 1). When the rollout is full, the agent updates its parameters under the actor-critic loss with SGD as the optimiser (Algorithm 2). To isolate the effect of meta-learning, all hyper-parameters except the entropy regularization weight $(\epsilon = \epsilon_{\mathrm{EN}})$ are fixed (Table 1); for each agent, we sweep for the learning rate that yields the highest cumulative reward within a 10 million step budget. For the non-adaptive baseline, we additionally sweep for the best regularization weight.

Meta-learning To meta-learn the entropy regularization weight, we introduce a small MLP with meta-parameters $\mathbf{w}$ that ingests a statistic $\mathbf{t}$ of the learning process—the average reward over each of the 10 most recent rollouts—and predicts the entropy regularization weight $\epsilon_{\mathbf{w}}(\mathbf{t}) \in \mathbb{R}_+$ to use in the agent's parameter update of $\mathbf{x}$. To compute meta-updates, for a given horizon $T = K$ or $T = K + (L - 1)$, we fix $\mathbf{w}$ and make $T$ agent parameter updates to obtain a sequence $(\tau_1, \mathbf{x}^{(1)}, \mathbf{z}^{(1)}, \ldots, \tau_T, \mathbf{x}^{(T)}, \mathbf{z}^{(T)})$.

![](images/4b1df2b07f970d735cd5e6053723cb337513f3f6e44e0ff8380df0aa1a8a57a6.jpg)

![](images/d41474c7df5282cb6fa6e4389680960bb4e0860323546ae0b5485d65919609db.jpg)

![](images/d96d92bedf9ad0a928203c84f... 2>/dev/null
Shading: standard deviation over 50 seeds.

MG is optimised by averaging each policy and entropy loss encountered in the sequence, i.e. the meta-objective is given by $\frac{1}{T}\sum_{t=1}^{T}\ell_{\mathrm{PG}}^{t}(\mathbf{x}^{(t)}(\mathbf{w})) + \epsilon_{\mathrm{meta}}\ell_{\mathrm{EN}}^{t}(\mathbf{x}^{(t)}(\mathbf{w}))$, where $\epsilon_{\mathrm{meta}} \in \{0,0.1\}$ is a fixed hyper-parameter and $\ell^t$ indicates that the objective is computed under $\tau_t$.

BMG is optimised by computing the matching loss $\mu_{\tau_T}(\tilde{\mathbf{x}},\mathbf{x}^{(K)}(\mathbf{w}))$, where $\tilde{\mathbf{x}}$ is given by $\tilde{\mathbf{x}} = \mathbf{x}^{(T)} - \beta \nabla_x(\ell_{\mathrm{PG}}^T (\mathbf{x}^{(T)}) + \epsilon_{\mathrm{meta}}\ell_{\mathrm{EN}}^T (\mathbf{x}^{(T)}))$. That is to say, the TB "unrolls" the meta-learner for $L - 1$ steps, starting from $(\mathbf{x}^{(K)},\mathbf{z}^{(K)})$, and takes a final policy-gradient step ($\epsilon_{\mathrm{meta}} = 0$ unless otherwise noted). Thus, in this setting, our TB exploits the fact that the first $L - 1$ steps have already been taken by the agent during the course of learning (Algorithm 3). Moreover, the final $L$th step only differs in the entropy regularization weight, and can therefore be implemented without an extra gradient computation. As such, the meta-update under BMG exhibits no significant computational overhead relative to the MG update. In practice, we observe no significant difference in wall-clock speed for a given $K$.

Main experiment: detailed results The purpose of our main experiment in Section 5.1 is to (a) test whether larger meta-learning horizons—particularly by increasing $L$—can mitigate the short-horizon bias, and (b) test whether the agent can learn an exploration schedule without explicit domain knowledge in the meta-objective (in the form of entropy regularization). As reported in Section 5.1, we find the answer to be affirmative in both cases.
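The target-bootstrap construction described above—a plain policy-gradient step on the most recent parameters, matched to $\pi_{\mathbf{x}^{(K)}}$ with a KL divergence—can be sketched as follows. The linear policy, single-state batch, and all constants below are toy assumptions, not the paper's configuration.

```python
# Sketch of the BMG target bootstrap: build the target x~ by one policy-
# gradient step from x^(T), then evaluate KL(pi_x~ || pi_x(K)).
import numpy as np


def policy(x, s):
    """Linear softmax policy over 3 actions."""
    logits = s @ x
    z = np.exp(logits - logits.max())
    return z / z.sum()


def pg_grad(x, s, a, advantage):
    """Gradient of the loss -advantage * log pi_x(a|s)."""
    g = np.outer(s, policy(x, s))
    g[:, a] -= s
    return advantage * g


def kl(p, q):
    return float(np.sum(p * np.log(p / q)))


rng = np.random.default_rng(1)
s = rng.normal(size=6)                      # toy observation
x_T = rng.normal(size=(6, 3))               # parameters after T inner steps
x_K = x_T + 0.1 * rng.normal(size=(6, 3))   # parameters after K < T steps

beta = 0.5
# x~ = x^(T) - beta * grad of the policy-gradient loss (eps_meta = 0).
x_target = x_T - beta * pg_grad(x_T, s, a=0, advantage=-1.0)
matching_loss = kl(policy(x_target, s), policy(x_K, s))  # KL(pi_x~ || pi_xK)
```

In the experiments, the gradient of `matching_loss` with respect to the meta-parameters (which generate $\mathbf{x}^{(K)}$) drives the outer update; here the loss is only evaluated.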
To shed further light on these findings, Figure 8 reports cumulative reward curves for our main experiment in Section 5.1. We note that MG tends to collapse for any $K$ unless the meta-objective is explicitly regularized via $\epsilon_{\mathrm{meta}}$. To characterise why MG fails for $\epsilon_{\mathrm{meta}} = 0$, Figure 9 portrays the policy entropy range under either MG or BMG. MG is clearly overly myopic: it continually shrinks the entropy range, ultimately resulting in a non-adaptive policy.

![](images/e4709ddceb03ba5c9c6b379a5485eed7b53faa2c198d3a596d44556979bf9531.jpg)

![](images/1712aa46587d885e9dd8c3c635c9e4a6adaeb47ad053873c27284fa17191db62.jpg)
Figure 9: Range of the entropy of a softmax policy over time (two-colors). Each shaded area shows the difference between the entropy 3333 steps after the agent observes a new reward function and the entropy after training on that reward function for 100 000 steps. Meta-gradients without explicit entropy regularization (left) reduce entropy over time, while bootstrapped meta-gradients (right) maintain entropy given a large enough meta-learning horizon. Averaged across 50 seeds.

![](images/54e64bd2aca2205cf2a82f0f8bcfbe7c0f2b2f8a142c0c5b49fe09a132f97460.jpg)
Figure 10: Ablations for the actor-critic agent with BMG. Each shaded area shows the range of entropy regularization weights generated by the meta-learner. The range is computed as the difference between $\epsilon$ at the beginning and end of each reward cycle. Left: entropy regularization weight range for $K = 1$ and $L = 7$. Center: entropy regularization weight range for $K = 1$ and $L = 1$. Right: for $K = 1$, the effect of increasing $L$ with or without meta-entropy regularization. Results aggregated over 50 seeds.

![](images/136239bcaf392a4064fab5df6c2feb2f44639cbfea2dae8895ea508ed03ab7e8.jpg)

![](images/56b626734434937656216c38ab80846b4c64c5c0efe8a5d620df6170d5f5040e.jpg)
Ablation: meta-regularization To fully control for the role of meta-regularization, we conduct further experiments comparing BMG with and without entropy regularization (i.e. $\epsilon_{\mathrm{meta}}$) in the $L$th target update step. Figure 10 demonstrates that BMG indeed suffers from myopia when $L = 1$, resulting in a collapse of the entropy regularization weight range. However, increasing the meta-learning horizon by setting $L = 7$ yields a wide entropy regularization weight range. While adding meta-regularization does expand the range somewhat, the difference in total return is not statistically significant (right panel, Figure 10).

![](images/4436972691a11c4e7a42dff5ee1e3914a2d26307999904796043b064ef4acd2c.jpg)
Figure 11: Total reward on two-colors with an actor-critic agent and different matching functions for BMG. Shading: standard deviation over 50 seeds.

Ablation: target bootstrap Our main TB takes $L - 1$ steps under the meta-learned update rule, i.e. the meta-learned entropy regularization weight schedule, and an $L$th policy-gradient step without entropy regularization. In this ablation, we verify that taking a final step under a different update rule is indeed critical. Figure 10 shows that, for $K = 1$ and $L \in \{1,7\}$, using the meta-learned update rule for all target update steps leads to a positive feedback loop that results in maximal entropy regularization, leading to a catastrophic loss of performance (right panel, Figure 10).

Ablation: matching function Finally, we control for different choices of matching function. Figure 11 contrasts the mode-covering version, KL-1, with the mode-seeking version, KL-2, as well as the symmetric KL. We observe that, in this experiment, this choice is not as significant as in other experiments. However, as in Atari, we find the mode-covering version to perform slightly better.
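The three matching functions compared in Figure 11 differ only in the direction (or symmetrisation) of the KL between the target and online policies. A minimal sketch, with illustrative distributions:

```python
# The three KL-based matching functions from Figure 11, computed on toy
# target/online policies. Which direction is "mode-covering" refers to the
# online policy being fit to the bootstrapped target.
import numpy as np


def kl(p, q):
    """KL divergence between two categorical distributions."""
    return float(np.sum(p * np.log(p / q)))


pi_target = np.array([0.7, 0.2, 0.1])  # bootstrapped target policy
pi_online = np.array([0.4, 0.4, 0.2])  # current policy pi_x(w)

kl_1 = kl(pi_target, pi_online)   # KL(target || online): mode-covering fit
kl_2 = kl(pi_online, pi_target)   # KL(online || target): mode-seeking fit
kl_sym = kl_1 + kl_2              # symmetric KL
```

Only the argument order changes between the variants; the gradient with respect to the online policy's parameters differs accordingly.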
# B.3 Q-LEARNING EXPERIMENTS

Agent In this experiment, we test Peng's $Q(\lambda)$ (Peng & Williams, 1994) agent with $\varepsilon$-greedy exploration. The agent implements a feed-forward MLP to represent a Q-function $q_{\mathbf{x}}$ that is optimised online. Thus, agent parameter updates do not use batching but are performed online (i.e. on each step). To avoid instability, we use a momentum term that maintains an Exponentially Moving Average (EMA) over the agent parameter gradient. In this experiment we fix all hyper-parameters of the update rule (Table 1) and instead focus on meta-learned $\varepsilon$-greedy exploration.

![](images/1db53f31a61cff3a076b790b92c215b7191606159424003ed4686899d8d87a4f.jpg)
Figure 12: Results on two-colors under a $Q(\lambda)$ agent with meta-learned $\varepsilon$-greedy exploration under BMG. Averaged over 50 seeds.

![](images/1bc6ccbc327eabf22c752f131deff71d210c5b3aa897485c357e39bbb4efe432.jpg)

![](images/afd48c0c16da18647270426b6ea34df691ae02788839ee77645d37983dc24ca0.jpg)

BMG We implement BMG in a similar fashion to the actor-critic case. The meta-learner is represented by a smaller MLP $\varepsilon_{\mathbf{w}}(\cdot)$ with meta-parameters $\mathbf{w}$ that ingests the last 50 rewards, denoted by $\mathbf{t}$, and outputs the $\varepsilon$ to use on the current time-step. That is to say, given meta-parameters $\mathbf{w}$, the agent's policy is defined by

$$
\pi_{\mathbf{x}}(\mathbf{a} \mid \mathbf{s}_t, \mathbf{t}_t, \mathbf{w}) = \begin{cases} 1 - \varepsilon_{\mathbf{w}}(\mathbf{t}_t) + \frac{\varepsilon_{\mathbf{w}}(\mathbf{t}_t)}{|\mathcal{A}|} & \text{if } \mathbf{a} = \arg\max_{\mathbf{b}} q_{\mathbf{x}}(\mathbf{s}_t, \mathbf{b}), \\ \frac{\varepsilon_{\mathbf{w}}(\mathbf{t}_t)}{|\mathcal{A}|} & \text{else.} \end{cases}
$$

Policy-matching This policy can be seen as a stochastic policy which takes the Q-maximizing action with probability $1 - \varepsilon$ and otherwise picks an action uniformly at random. The level of entropy in this policy is regulated by the meta-learner. We define a TB by defining a target policy under $q_{\tilde{\mathbf{x}}}$, where $\tilde{\mathbf{x}}$ is given by taking $L$ update steps. Since there are no meta-parameters in the update rule, all $L$ steps use the same update rule. However, we define the target policy as the greedy policy

$$
\pi_{\tilde{\mathbf{x}}}(\mathbf{a} \mid \mathbf{s}_t) = \begin{cases} 1 & \text{if } \mathbf{a} = \arg\max_{\mathbf{b}} q_{\tilde{\mathbf{x}}}(\mathbf{s}_t, \mathbf{b}), \\ 0 & \text{else.} \end{cases}
$$

The resulting BMG update is simple: minimize the KL-divergence $\mu^{\pi}(\tilde{\mathbf{x}},\mathbf{x})\coloneqq \mathrm{KL}\left(\pi_{\tilde{\mathbf{x}}}\parallel \pi_{\mathbf{x}}\right)$ by adjusting the entropy in $\pi_{\mathbf{x}}$ through $\varepsilon_{\mathbf{w}}$. Thus, policy-matching under this target encourages the meta-learner to match a greedy policy-improvement operation on a target $q_{\tilde{\mathbf{x}}}$ that has been trained for a further $L$ steps. More specifically, if $\arg\max_{\mathbf{b}}q_{\tilde{\mathbf{x}}}(\mathbf{s},\mathbf{b}) = \arg\max_{\mathbf{b}}q_{\mathbf{x}}(\mathbf{s},\mathbf{b})$, so that the greedy policy improvement matches the target, then the matching loss is minimised by setting $\varepsilon = 0$. If the greedy actions do not match, so that acting greedily w.r.t. $q_{\mathbf{x}}$ does not match the target, then the matching loss is minimised by increasing entropy, i.e. increasing $\varepsilon$. The meta-objective is defined in terms of $\mathbf{x}$ as it does not require differentiation through the update rule.
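A small sketch of the policy-matching objective above; the Q-values are illustrative. It checks the two cases just discussed: when the greedy actions of $q_{\mathbf{x}}$ and $q_{\tilde{\mathbf{x}}}$ agree, the KL decreases as $\varepsilon$ shrinks, and when they disagree, it decreases as $\varepsilon$ grows.

```python
# Policy-matching for the Q-learning agent: KL between the greedy target
# policy (one-hot) and the epsilon-greedy online policy.
import numpy as np


def eps_greedy(q, eps):
    """Epsilon-greedy distribution over actions under Q-values q."""
    n = len(q)
    p = np.full(n, eps / n)
    p[np.argmax(q)] += 1.0 - eps
    return p


def kl_to_greedy(q_target, q_online, eps):
    """KL(greedy(q_target) || eps_greedy(q_online)); greedy is one-hot,
    so the KL reduces to -log of the online probability of the greedy action."""
    a_star = np.argmax(q_target)
    return float(-np.log(eps_greedy(q_online, eps)[a_star]))


q_online = np.array([1.0, 0.5, 0.2])
q_agree = np.array([2.0, 0.1, 0.1])     # same argmax as q_online
q_disagree = np.array([0.1, 2.0, 0.1])  # different argmax

# Matching argmax: a smaller epsilon gives a smaller matching loss.
assert kl_to_greedy(q_agree, q_online, 0.1) < kl_to_greedy(q_agree, q_online, 0.5)
# Mismatched argmax: a larger epsilon gives a smaller matching loss.
assert kl_to_greedy(q_disagree, q_online, 0.5) < kl_to_greedy(q_disagree, q_online, 0.1)
```

In the experiments, $\varepsilon$ is produced by the meta-network $\varepsilon_{\mathbf{w}}(\mathbf{t})$, so this loss shapes $\mathbf{w}$ rather than a free scalar.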
'Value'-matching A disadvantage of policy matching is that it provides a sparse learning signal: $\varepsilon$ is increased when the target policy differs from the current policy and decreased otherwise. The magnitude of the change depends solely on the current value of $\varepsilon$. It is therefore desirable to evaluate alternative matching functions that provide a richer signal. Inspired by value-matching for actor-critic agents, we construct a form of 'value' matching by taking the expectation over $q_{\mathbf{x}}$ under the induced stochastic policy, $u_{\mathbf{x}}(\mathbf{s}) \coloneqq \sum_{\mathbf{a} \in \mathcal{A}} \pi_{\mathbf{x}}(\mathbf{a} \mid \mathbf{s}) q_{\mathbf{x}}(\mathbf{s}, \mathbf{a})$. The resulting matching objective is given by

$$
\mu^{u}(\tilde{\mathbf{x}}, \mathbf{x}) = \mathbb{E}\left[\left(u_{\tilde{\mathbf{x}}}(\mathbf{s}) - u_{\mathbf{x}}(\mathbf{s}; \mathbf{t}, \mathbf{w})\right)^{2}\right].
$$

While the objective is structurally similar to value-matching, $u$ does not correspond to a well-defined value function since $q_{\mathbf{x}}$ is not an estimate of the action-value of $\pi_{\mathbf{x}}$.

Detailed results Figure 12 shows the learned $\varepsilon$-schedules for different meta-learning horizons: if $L$ is large enough, the agent is able to increase exploration when the task switches and quickly recovers a near-optimal policy for the current cycle. Figure 12 further shows that a richer matching function, in this case in the form of 'value' matching, can yield improved performance.

Table 1: Two-colors hyper-parameters
| Hyper-parameter | Value |
| --- | --- |
| **Actor-critic: inner learner** | |
| Optimiser | SGD |
| Learning rate | 0.1 |
| Batch size | 16 (losses are averaged) |
| γ | 0.99 |
| μ | KL(π_x ∥ π_x′) |
| MLP hidden layers (v, π) | 2 |
| MLP feature size (v, π) | 256 |
| Activation function | ReLU |
| **Actor-critic: meta-learner** | |
| Optimiser | Adam |
| ε (Adam) | $10^{-4}$ |
| β₁, β₂ | 0.9, 0.999 |
| Learning rate candidates | {$3 \cdot 10^{-6}$, $10^{-5}$, $3 \cdot 10^{-5}$, $10^{-4}$, $3 \cdot 10^{-4}$} |
| MLP hidden layers (ε) | 1 |
| MLP feature size (ε) | 32 |
| Activation function | ReLU |
| Output activation | Sigmoid |
| **Q(λ): inner learner** | |
| Optimiser | Adam |
| Learning rate | $3 \cdot 10^{-5}$ |
| ε (Adam) | $10^{-4}$ |
| β₁, β₂ | 0.9, 0.999 |
| Gradient EMA | 0.9 |
| λ | 0.7 |
| γ | 0.99 |
| MLP hidden layers (Q) | 2 |
| MLP feature size (Q) | 256 |
| Activation function | ReLU |
| **Q(λ): meta-learner** | |
| Learning rate | $10^{-4}$ |
| ε (Adam) | $10^{-4}$ |
| β₁, β₂ | 0.9, 0.999 |
| Gradient EMA | 0.9 |
| MLP hidden layers (ε) | 1 |
| MLP feature size (ε) | 32 |
| Activation function | ReLU |
| Output activation | Sigmoid |
# C ATARI

Setup Hyper-parameters are reported in Table 2. We follow the original IMPALA (Espeholt et al., 2018) setup, but do not down-sample or gray-scale frames from the environment. Following previous works (Xu et al., 2018; Zahavy et al., 2020), we treat each game level as a separate learning problem; the agent is randomly initialized at the start of each learning run, and meta-learning is conducted online during learning on a single task (see Algorithm 6). We evaluate final performance between 190-200 million frames. All experiments are conducted with 3 independent runs under different seeds. Each of the 57 levels in the Atari suite is a unique environment with distinct visuals and game mechanics. Exploiting this independence, statistical tests of aggregate performance rely on a total sample size per agent of $3 \times 57 = 171$.

Agent We use a standard feed-forward agent that receives a stack of the 4 most recent frames (Mnih et al., 2013) and outputs softmax action probabilities along with a value prediction. The agent is implemented as a deep neural network; we use the IMPALA network architecture without LSTMs, with larger convolution kernels to compensate for a more complex input space, and with a larger conv-to-linear projection. We add experience replay (as per Schmitt et al., 2020) to allow multiple steps on the target. All agents use the same number of online samples; unless otherwise stated, they also use the same number of replay samples. We ablate the role of replay data in Appendix C.2.

STACX The IMPALA agent introduces a specific form of importance sampling in the actor-critic update; STACX largely relies on the same importance sampling mechanism, but differs slightly to facilitate the meta-gradient flow. The actor-critic update in STACX is defined by Eq. 4 with the following definitions of $\rho$ and $G$.
Let $\bar{\rho} \geq \bar{c} \in \mathbb{R}_{+}$ be given and let $\nu : \mathcal{S} \times \mathcal{A} \to [0,1]$ represent the behaviour policy that generated the rollout. Given $\pi_{\mathbf{x}}$ and $v_{\bar{\mathbf{z}}}$, define the Leaky V-Trace target by

$$
\eta_t := \pi_{\mathbf{x}}(\mathbf{a}_t \mid \mathbf{s}_t) / \nu(\mathbf{a}_t \mid \mathbf{s}_t),
$$

$$
\rho_t := \alpha_{\rho} \min\left\{\eta_t, \bar{\rho}\right\} + (1 - \alpha_{\rho}) \eta_t,
$$

$$
c_i := \lambda \left(\alpha_c \min\left\{\eta_i, \bar{c}\right\} + (1 - \alpha_c) \eta_i\right),
$$

$$
\delta_t := \rho_t \left(\gamma v_{\bar{\mathbf{z}}}(\mathbf{s}_{t+1}) + r_{t+1} - v_{\bar{\mathbf{z}}}(\mathbf{s}_t)\right),
$$

$$
G_t^{(n)} = v_{\bar{\mathbf{z}}}(\mathbf{s}_t) + \sum_{i=0}^{n-1} \gamma^i \left(\prod_{j=0}^{i-1} c_{t+j}\right) \delta_{t+i},
$$

with $\alpha_{\rho} \geq \alpha_{c}$. Note that, assuming $\bar{c} \geq 1$ and $\lambda = 1$, this reduces to the n-step return in the on-policy setting, since $\eta_t = 1$ and hence $\rho_t = c_t = 1$. The original V-trace target sets $\alpha_{\rho} = \alpha_{c} = 1$.

STACX defines the main "task" as a tuple $(\pi^0, v^0, f(\cdot, \mathbf{w}_0))$, consisting of a policy, a critic, and an actor-critic objective (Eq. 4) under Leaky V-trace correction with meta-parameters $\mathbf{w}_0$. Auxiliary tasks are analogously defined tuples $(\pi^i, v^i, f(\cdot, \mathbf{w}_i)), i \geq 1$. All policies and critics share the same feature extractor but differ in a separate MLP for each $\pi^i$ and $v^i$. The objectives differ in their hyper-parameters, with all hyper-parameters being meta-learned. Auxiliary policies are not used for acting; only the main policy $\pi^0$ interacts with the environment.
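The Leaky V-Trace target above can be computed directly from its definitions. A toy sketch (array contents are illustrative); as noted, on-policy inputs with $\lambda = 1$ and $\bar{c} \geq 1$ recover the n-step return:

```python
# Leaky V-Trace n-step target, following the formulas term by term.
import numpy as np


def leaky_vtrace_target(pi, nu, rewards, values, gamma, lam,
                        rho_bar=1.0, c_bar=1.0, a_rho=1.0, a_c=1.0):
    """G_t^(n) for t = 0, given pi/nu action probabilities over n steps."""
    eta = pi / nu                                            # importance ratios
    rho = a_rho * np.minimum(eta, rho_bar) + (1 - a_rho) * eta
    c = lam * (a_c * np.minimum(eta, c_bar) + (1 - a_c) * eta)
    # delta_i = rho_i * (gamma * v(s_{i+1}) + r_{i+1} - v(s_i))
    delta = rho * (gamma * values[1:] + rewards - values[:-1])
    g = values[0]
    for i in range(len(rewards)):
        g += gamma**i * np.prod(c[:i]) * delta[i]
    return float(g)


pi = np.array([0.5, 0.4, 0.3])
nu = np.array([0.5, 0.4, 0.3])            # on-policy: eta = 1 everywhere
rewards = np.array([0.0, 1.0, 0.0])       # r_1, r_2, r_3
values = np.array([0.2, 0.1, 0.3, 0.0])   # v(s_0), ..., v(s_3)

g = leaky_vtrace_target(pi, nu, rewards, values, gamma=0.99, lam=1.0)
# On-policy, this telescopes to r_1 + gamma * r_2 + gamma^2 * r_3
# + gamma^3 * v(s_3) = 0.99.
```

Setting `a_rho` and `a_c` below 1 "leaks" part of the unclipped importance ratio through, which is what makes the correction differentiable-friendly for meta-gradients.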
The objective used to update the agent's parameters is the sum over all tasks (each task is weighted through $\epsilon_{\mathrm{PG}}, \epsilon_{\mathrm{EN}}, \epsilon_{\mathrm{TD}}$). The objective used for the MG update is the original IMPALA objective under fixed hyper-parameters $\mathbf{p}$ (see Meta-Optimisation in Table 2). Updates to agent parameters and meta-parameters happen simultaneously on rollouts $\tau$. Concretely, let $\mathbf{m}$ denote parameters of the feature extractor, with $(\mathbf{x}_i, \mathbf{z}_i)$ denoting parameters of task $i$'s policy MLP and critic MLP. Let $\mathbf{u}_i := (\mathbf{m}, \mathbf{x}_i, \mathbf{z}_i)$ denote parameters of $(\pi^i, v^i)$, with $\mathbf{u} := (\mathbf{m}, \mathbf{x}_0, \mathbf{z}_0, \dots, \mathbf{x}_n, \mathbf{z}_n)$. Let $\mathbf{w} = (\mathbf{w}_0, \dots, \mathbf{w}_n)$ and denote by $\mathbf{h}$ auxiliary vectors of the optimiser. Given (a batch of) rollout(s) $\tau$, the STACX update is given by

$$
\left(\mathbf{u}^{(1)}, \mathbf{h}_u^{(1)}\right) = \operatorname{RMSProp}\left(\mathbf{u}, \mathbf{h}_u, \mathbf{g}_u\right), \qquad \mathbf{g}_u = \nabla_u \sum_{i=0}^{n} f_\tau\left(\mathbf{u}_i; \mathbf{w}_i\right),
$$

$$
\left(\mathbf{w}^{(1)}, \mathbf{h}_w^{(1)}\right) = \operatorname{Adam}\left(\mathbf{w}, \mathbf{h}_w, \mathbf{g}_w\right), \qquad \mathbf{g}_w = \nabla_w f_\tau\big(\mathbf{u}_0^{(1)}(\mathbf{w}); \mathbf{p}\big).
$$

BMG We use the same setup, architecture, and hyper-parameters for BMG as for STACX unless otherwise noted; the central difference is the computation of $\mathbf{g}_w$.
For $L = 1$, we compute the bootstrapped meta-gradient under $\mu_\tau$ on data $\tau$ by

$$
\mathbf{g}_w = \nabla_w \mu_\tau\left(\tilde{\mathbf{u}}_0, \mathbf{u}_0^{(1)}(\mathbf{w})\right), \quad \text{where} \quad \left(\tilde{\mathbf{u}}_0, -\right) = \operatorname{RMSProp}\left(\mathbf{u}_0^{(1)}, \mathbf{h}_u^{(1)}, \nabla_u f_\tau\left(\mathbf{u}_0^{(1)}; \mathbf{p}\right)\right).
$$

Note that the target uses the same gradient $\nabla_{u}f(\mathbf{u}_{0}^{(1)};\mathbf{p})$ as the outer objective in STACX; hence, BMG does not use additional gradient information or additional data for $L = 1$. The only extra computation is the element-wise update required to compute $\tilde{\mathbf{u}}_0$ and the computation of the matching loss. We discuss computational considerations in Appendix C.4. For $L > 1$, we take $L - 1$ steps under the meta-learned objective, with different replay data in each update. To write this explicitly, let $\tau$ be the rollout data as above, and let $\tilde{\tau}^{(l)}$ denote a separate sample of only replay data used in the $l$th target update step.
For $L > 1$, the TB is described by the process

$$
\left(\tilde{\mathbf{u}}_0^{(1)}, \tilde{\mathbf{h}}_u^{(1)}\right) = \operatorname{RMSProp}\left(\mathbf{u}_0^{(1)}, \mathbf{h}_u^{(1)}, \mathbf{g}_u^{(1)}\right), \qquad \mathbf{g}_u^{(1)} = \nabla_u \sum_{i=0}^{n} f_{\tilde{\tau}^{(1)}}\left(\mathbf{u}_i^{(1)}; \mathbf{w}_i\right),
$$

$$
\left(\tilde{\mathbf{u}}_0^{(2)}, \tilde{\mathbf{h}}_u^{(2)}\right) = \operatorname{RMSProp}\left(\tilde{\mathbf{u}}_0^{(1)}, \tilde{\mathbf{h}}_u^{(1)}, \tilde{\mathbf{g}}_u^{(1)}\right), \qquad \tilde{\mathbf{g}}_u^{(1)} = \nabla_u \sum_{i=0}^{n} f_{\tilde{\tau}^{(2)}}\left(\tilde{\mathbf{u}}_i^{(1)}; \mathbf{w}_i\right),
$$

$$
\vdots
$$

$$
\left(\tilde{\mathbf{u}}_0, -\right) = \operatorname{RMSProp}\left(\tilde{\mathbf{u}}_0^{(L-1)}, \tilde{\mathbf{h}}_u^{(L-1)}, \tilde{\mathbf{g}}_u^{(L-1)}\right), \qquad \tilde{\mathbf{g}}_u^{(L-1)} = \nabla_u f_\tau\big(\tilde{\mathbf{u}}_0^{(L-1)}; \mathbf{p}\big).
$$

Targets and corresponding momentum vectors are discarded upon computing the meta-gradient. This TB corresponds to following the meta-learned update rule for $L - 1$ steps, with a final step under the IMPALA objective. We show in Appendix C.3 that this final step is crucial to stabilise meta-learning. For pseudo-code, see Algorithm 6.

Matching functions are defined in terms of the rollout $\tau$, with targets defined in terms of the main task $\mathbf{u}_0$.
Concretely, we define the following objectives: + +$$ +\mu_ {\tau} ^ {\pi} \left(\tilde {\mathbf {u}} _ {0}, \mathbf {u} _ {0} ^ {(1)} (\mathbf {w})\right) = \mathrm {K L} \left(\pi_ {\tilde {\mathbf {u}} _ {0}} \| \pi_ {\mathbf {u} _ {0} ^ {(1)} (\mathbf {w})}\right), +$$ + +$$ +\mu_ {\tau} ^ {v} \left(\tilde {\mathbf {u}} _ {0}, \mathbf {u} _ {0} ^ {(1)} (\mathbf {w})\right) = \mathbb {E} \left[ \left(v _ {\tilde {\mathbf {u}} _ {0}} - v _ {\mathbf {u} _ {0} ^ {(1)} (\mathbf {w})}\right) ^ {2} \right], +$$ + +$$ +\mu_ {\tau} ^ {\pi + v} \left(\tilde {\mathbf {u}} _ {0}, \mathbf {u} _ {0} ^ {(1)} (\mathbf {w})\right) = \mu_ {\tau} ^ {\pi} \left(\tilde {\mathbf {u}} _ {0}, \mathbf {u} _ {0} ^ {(1)} (\mathbf {w})\right) + \lambda \mu_ {\tau} ^ {v} \left(\tilde {\mathbf {u}} _ {0}, \mathbf {u} _ {0} ^ {(1)} (\mathbf {w})\right), \qquad \lambda = 0. 2 5, +$$ + +$$ +\mu^ {L 2} \left(\tilde {\mathbf {u}} _ {0}, \mathbf {u} _ {0} ^ {(1)} (\mathbf {w})\right) = \left\| \tilde {\mathbf {u}} _ {0} - \mathbf {u} _ {0} ^ {(1)} (\mathbf {w}) \right\| _ {2}. +$$ + +
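On toy policy and value outputs, the four objectives above are straightforward to evaluate; a minimal sketch (all values are illustrative, with $\lambda = 0.25$ as above):

```python
# The four Atari matching objectives evaluated on toy outputs.
import numpy as np


def kl(p, q):
    return float(np.sum(p * np.log(p / q)))


pi_tgt = np.array([0.6, 0.3, 0.1])  # policy under target parameters u~_0
pi_cur = np.array([0.5, 0.3, 0.2])  # policy under u_0^(1)(w)
v_tgt, v_cur = np.array([1.2, 0.4]), np.array([1.0, 0.5])   # value outputs
u_tgt, u_cur = np.array([0.1, -0.2]), np.array([0.0, 0.0])  # flat parameters

mu_pi = kl(pi_tgt, pi_cur)                   # policy matching (KL)
mu_v = float(np.mean((v_tgt - v_cur) ** 2))  # value matching (squared error)
mu_pi_v = mu_pi + 0.25 * mu_v                # combined, lambda = 0.25
mu_l2 = float(np.linalg.norm(u_tgt - u_cur)) # parameter-space L2
```

Only $\mu^{L2}$ operates in parameter space; the other three compare the behaviours induced by the parameters, which Appendix C.1 argues is what stabilises meta-optimisation.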
Algorithm 4 Distributed N-step RL actor loop
Require: N▷ Rollout length.
Require: R▷ Centralised replay server.
Require: d▷ Initial state method.
Require: c▷ Parameter sync method.
while True do
if |B| = N then
R ← R ∪ B▷ Send rollout to replay.
x ← c()▷ Sync parameters from learner.
s ← d(s)▷ Optional state reset.
B ← (s)▷ Initialise rollout.
end if
a ~ πx(s)▷ Sample action.
s, r ← env(s, a)▷ Take a step in environment.
B ← B ∪ (a, r, s)▷ Add to rollout.
end while
+ +
Algorithm 5 K-step distributed learning loop
Require: B1, B2, ..., BK▷ K N-step rollouts.
Require: x ∈ Rnx, z ∈ Rnz, w ∈ Rnw▷ Policy, value function, and meta parameters.
for k = 1, 2, ..., K do
(x, z) ← φ((x, z), Bk, w)▷ Inner update step.
end for
return x, z
+ +
Algorithm 6 Distributed RL with BMG
Require: N, K, L, M▷ Rollout length, meta-update length, bootstrap length, parallel actors.
Require: x ∈ Rnx, z ∈ Rnz, w ∈ Rnw▷ Policy, value function, and meta parameters.
u ← (x, z)
Initialise R replay buffer▷ Collects N-step trajectories B from actors.
Initialise M asynchronous actors▷ Run concurrently, Algorithm 4.
while True do
B(1), B(2), ..., B(K+L) ∼ R▷ Sample K + L rollouts from replay.
u(K) ← InnerLoop(u, w, B(1), ..., B(K))▷ K-step inner loop, Algorithm 5.
u(K+L-1) ← InnerLoop(u(K), w, B(K+1), ..., B(K+L-1))▷ (L-1)-step bootstrap, Algorithm 5.
ū ← u(K+L-1) - α∇uℓ(u(K+L-1), B(K+L))▷ Gradient step on objective ℓ.
w ← w - β∇wμ(ū, u(K)(w))▷ BMG outer step.
u ← u(K)▷ Optional: continue from the u(K+L-1) update instead.
Send parameters x from learner to actors.
end while
+ +Table 2: Atari hyper-parameters + +
| Hyper-parameter | Value |
| --- | --- |
| **ALE (Bellemare et al., 2013)** | |
| Frame dimensions (H, W, D) | 160, 210, 3 |
| Frame pooling | None |
| Frame grayscaling | None |
| Num. stacked frames | 4 |
| Num. action repeats | 4 |
| Sticky actions (Machado et al., 2018) | False |
| Reward clipping | [-1, 1] |
| γ = 0 on loss of life | True |
| Max episode length | 108 000 frames |
| Initial noop actions | 30 |
| **IMPALA network (Espeholt et al., 2018)** | |
| Convolutional layers | 4 |
| Channel depths | 64, 128, 128, 64 |
| Kernel size | 3 |
| Kernel stride | 1 |
| Pool size | 3 |
| Pool stride | 2 |
| Padding | 'SAME' |
| Residual blocks per layer | 2 |
| Conv-to-linear feature size | 512 |
| **STACX (Zahavy et al., 2020)** | |
| Auxiliary tasks | 2 |
| MLP hidden layers | 2 |
| MLP feature size | 256 |
| Max entropy loss value | 0.9 |
| **Optimisation** | |
| Unroll length | 20 |
| Batch size | 18 |
| of which from replay | 12 |
| of which is online data | 6 |
| Replay buffer size | 10 000 |
| LASER (Schmitt et al., 2020) KL-threshold | 2 |
| Optimiser | RMSProp |
| Initial learning rate | $10^{-4}$ |
| Learning rate decay interval | 200 000 frames |
| Learning rate decay rate | Linear to 0 |
| Momentum decay | 0.99 |
| Epsilon | $10^{-4}$ |
| Gradient clipping, max norm | 0.3 |
| **Meta-Optimisation** | |
| γ, λ, ρ̄, c̄, α | 0.995, 1, 1, 1, 1 |
| ε_PG, ε_EN, ε_TD | 1, 0.01, 0.25 |
| Optimiser | Adam |
| Learning rate | $10^{-3}$ |
| β₁, β₂ | 0.9, 0.999 |
| Epsilon | $10^{-4}$ |
| Gradient clipping, max norm | 0.3 |
# C.1 BMG DECOMPOSITION

In this section, we decompose the BMG agent to understand where the observed gains come from. To do so, we begin by noting that—by virtue of Eq. 3—STACX is a special case of BMG under $\mu(\tilde{\mathbf{u}},\mathbf{u}_0^{(1)}(\mathbf{w})) = \|\tilde{\mathbf{u}} - \mathbf{u}_0^{(1)}(\mathbf{w})\|_2^2$ with $\tilde{\mathbf{u}} = \mathbf{u}_0^{(1)} - \frac{1}{2}\nabla_u f_\tau(\mathbf{u}_0^{(1)};\mathbf{p})$. That is to say, STACX is recovered if the target is generated by a pure SGD step and the matching function is the squared L2 objective. We refer to this configuration as SGD, L2. From this baseline—i.e. STACX—a minimal change is to retain the matching function but use RMSProp to generate the target. We refer to this configuration as RMS, L2. From Corollary 1, we should expect that correcting for curvature improves performance. While RMSProp is not a representation of the metric $G$ in the analysis, it nevertheless provides some form of curvature correction. The matching function can then be used for further corrections.

Figure 13 shows that changing the target update rule from SGD to RMSProp, thereby correcting for curvature, yields a substantial gain. This supports our main claim that BMG can control for curvature and thereby facilitate meta-optimisation. Using the squared Euclidean distance in parameter space (akin to Nichol et al., 2018; Flennerhag et al., 2019) is surprisingly effective. However, it exhibits substantial volatility and is prone to crashing (cf. Figure 15); changing the matching function to the policy KL-divergence stabilizes meta-optimisation. Pure policy-matching leaves the role of the critic—i.e. policy evaluation—implicit. Having an accurate value function approximation is important to obtain high-quality policy gradients. It is therefore unsurprising that adding value matching provides a statistically significant improvement.
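The claim that STACX is recovered under the SGD, L2 configuration can be checked directly (a sketch; $J$ denotes the Jacobian of $\mathbf{u}_0^{(1)}(\mathbf{w})$ with respect to $\mathbf{w}$). Substituting $\tilde{\mathbf{u}} = \mathbf{u}_0^{(1)} - \frac{1}{2}\nabla_u f_\tau(\mathbf{u}_0^{(1)}; \mathbf{p})$ into the matching loss gives

$$
\nabla_w \left\| \tilde{\mathbf{u}} - \mathbf{u}_0^{(1)}(\mathbf{w}) \right\|_2^2
= -2 J^\top \left( \tilde{\mathbf{u}} - \mathbf{u}_0^{(1)} \right)
= J^\top \nabla_u f_\tau\left(\mathbf{u}_0^{(1)}; \mathbf{p}\right)
= \nabla_w f_\tau\left(\mathbf{u}_0^{(1)}(\mathbf{w}); \mathbf{p}\right),
$$

which is exactly the STACX meta-gradient $\mathbf{g}_w$.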
Finally, we find that BMG can also mitigate myopia by extending the meta-learning horizon, in our TB by unrolling the meta-learned update rule for $L - 1$ steps. This is roughly as important as correcting for curvature, in terms of the relative performance gain.

To further support these findings, we estimate the effect BMG has on ill-conditioning and meta-gradient variance on three games where both STACX and BMG exhibit stable learning (to avoid confounding factors of non-stationary dynamics): Kangaroo, Star Gunner, and Ms Pacman. While

![](images/85f5544f581066048acde3f2f5ffc2fd19f0cecf366bdb846edbcb9f84459113.jpg)
Figure 13: Atari BMG decomposition. We report human normalized score (median, quantiles, $\frac{1}{2}\mathrm{IQR}$) between 190-200M frames over all 57 games, with 3 independent runs for each configuration.

Table 3: Meta-gradient cosine similarity and variance per-game at 50-150M frames over 3 seeds.
| | KL | KL & V | L2 | STACX |
| --- | --- | --- | --- | --- |
| **Kangaroo** | | | | |
| Cosine similarity | 0.19 (0.02) | 0.11 (0.01) | 0.001 (1e-4) | 0.009 (0.01) |
| Meta-gradient variance | 0.05 (0.01) | 0.002 (1e-4) | 2.3e-9 (4e-9) | 6.4e-4 (7e-4) |
| Meta-gradient norm variance | 49 | 68 | 47 | 44 |
| **Ms Pacman** | | | | |
| Cosine similarity | 0.11 (0.006) | 0.03 (0.006) | 0.002 (4e-4) | -0.005 (0.01) |
| Meta-gradient variance | 90 (12) | 0.8 (0.2) | 9.6e-7 (2e-8) | 0.9 (0.2) |
| Meta-gradient norm variance | 2.1 | 7.9 | 4.2 | 2.1 |
| **Star Gunner** | | | | |
| Cosine similarity | 0.13 (0.008) | 0.07 (0.001) | 0.003 (5e-4) | 0.002 (0.02) |
| Meta-gradient variance | 4.2 (1.1) | 1.5 (2.3) | 1.9e-7 (3e-7) | 0.06 (0.03) |
| Meta-gradient norm variance | 6.1 | 6.6 | 11.7 | 6.5 |
the Hessian of the meta-gradient is intractable, an immediate effect of ill-conditioning is gradient interference, which we can estimate through the cosine similarity between consecutive meta-gradients. We estimate meta-gradient variance on a per-batch basis. Table 3 presents mean statistics between 50M and 150M frames, with standard deviation over 3 seeds. BMG achieves a meta-gradient cosine similarity that is generally 2 orders of magnitude larger than that of STACX. Table 3 also demonstrates explicitly that using the KL divergence as the matching function results in better curvature relative to using the L2 distance. The variance of the meta-gradient is larger for BMG (under KL) than for STACX. This is due to intrinsically different gradient magnitudes. To make comparisons, we report the gradient-norm-to-gradient-variance ratio, which roughly indicates signal to noise. We note that in this metric, BMG tends to be on par with or lower than STACX.

# C.2 EFFECT OF REPLAY

We find that extending the meta-learning horizon by taking more steps on the target leads to large performance improvements. To obtain these improvements, we find that it is critical to re-sample replay data for each step, as opposed to re-using the same data for each rollout. Figure 14 demonstrates this for $L = 4$ on Ms Pacman. This can be explained by noting that reusing data allows the target to overfit to the current batch. By re-sampling replay data we obtain a more faithful simulation of what the meta-learned update rule would produce in $L - 1$ steps.

The amount of replay data is a confounding factor in the meta-objective. We stress that the agent parameter update is always the same in any experiment we run. That is to say, the additional use of replay data only affects the computation of the meta-objective. To control for this additional data in the meta-objective, we consider a subset of games where we see large improvements from $L > 1$.
We run STACX and BMG with $L = 1$ , but increase the amount of replay data used to compute the meta-objective to match the total amount of replay data used in the meta-objective when $L = 4$ . This changes the online-to-replay ratio from 6:12 to 6:48 in the meta objective. + +![](images/06b646f0aee5d540788550d2e8c7ccbb325dfb00ba86d63a426e31dc742f67ad.jpg) +Figure 14: Atari, learning curves on MS Pacman for KL & V. $L = 4, R$ computes the $L$ th step on only replay data. $L = 4, w$ uses the meta-learned objective for the $L$ th step (with $L$ th step computed on online and replay data, as per default). Shading depicts standard deviation across 3 seeds. + +Figure 15 shows that the additional replay data is not responsible for the performance improvements we see for $L = 4$ . In fact, we find that increasing the amount of replay data in the meta-objective exacerbates off-policy issues and leads to reduced performance. It is striking that BMG can make use of this extra off-policy data. Recall that we use only off-policy replay data to take the first $L - 1$ steps on the target, and use the original online-to-replay ratio (6:12) in the $L$ th step. In Figure 14, we test the effect of using only replay for all $L$ steps and find that having online data in the $L$ th update step is critical. These results indicate that BMG can make effective use of replay by simulating the effect of the meta-learned update rule on off-policy data and correct for potential bias using online data. + +# C.3 L vs K + +Given that increasing $L$ yields substantial gains in performance, it is interesting to compare against increasing $K$ , the number of agent parameter updates to backpropagate through. For fair comparison, we use an identical setup as for $L > 1$ , in the sense that we use new replay data for each of the initial $K - 1$ steps, while we use the default rollout $\tau$ for the $K$ th step. Hence, the data characteristics for $K > 1$ are identical to those of $L > 1$ . 
+ +However, an important difference arises because each meta-update now takes $K$ steps on the agent's parameters. This means that, within the 200 million frame budget, $K > 1$ has a computational advantage, as it is able to do more updates to the agent's parameters. With that said, these additional $K - 1$ updates use replay data only. + +![](images/f3861ac7c535f60e01fbce4e7fbe0e2a5c3ebe1af8790a73a23421cb948bc08f.jpg) +Figure 15: Atari experience replay ablation. We report episode returns, normalized to be in the range [0, max return] for each game for ease of comparison. Shading depicts standard deviation across 3 seeds. $D$ denotes the default BMG configuration for $L = 1$ , with $L = 4$ analogously defined. $R$ denotes $L = 1$ , but with additional replay in the meta-objective to match the amount of replay used in $L = 4$ . + +Figure 16 demonstrates that increasing $K$ is fundamentally different from increasing $L$ . We generally observe a loss of performance, again due to interference from replay. This suggests that target bootstrapping allows a fundamentally different way of extending the meta-learning horizon. In particular, these results suggest that meta-bootstrapping allows us to use relatively poor-quality (as evidenced by $K > 1$ ) approximations to long-term consequences of the meta-learned update rule without impairing the agent's actual parameter update. Finally, there are substantial computational gains from increasing the meta-learning horizon via $L$ rather than $K$ (Figure 17). + +# C.4 COMPUTATIONAL CHARACTERISTICS + +IMPALA's distributed setup is implemented on a single machine with 56 CPU cores and 8 TPU (Jouppi et al., 2017) cores. Two TPU cores are used to act in 48 environments asynchronously in parallel, sending rollouts to a replay buffer that a centralized learner uses to update agent parameters and meta-parameters. Gradient computations are distributed along the batch dimension across the remaining 6 TPU cores.
All Atari experiments use this setup; training for 200 million frames takes 24 hours. + +Figure 17 describes the computational properties of STACX and BMG as a function of the number of agent parameters and the meta-learning horizon, $H$ . For STACX, the meta-learning horizon is defined by the number of update steps to backpropagate through, $K$ . For BMG, we test one version which holds $L = 1$ fixed and varies $K$ , as for STACX, and one version which holds $K = 1$ fixed and varies $L$ . To control for network size, we vary the number of channels in the convolutions of the network. We use a base number of channels per layer, $x = (16,32,32,16)$ , that we multiply by a factor of 1, 2, or 4. Thus we consider networks with kernel channels $1x = (16,32,32,16)$ , $2x = (32,64,64,32)$ , and $4x = (64,128,128,64)$ . Our main agent uses a network size (Table 2) equal to $4x$ . We found that larger networks would not fit into memory when $K > 1$ . + +First, consider the effect of increasing $K$ (with $L = 1$ for BMG). For the small network $(1x)$ , BMG is roughly on par with STACX for all values of $K$ considered. However, BMG exhibits poorer scaling in network size, owing to the additional update step required to compute the target bootstrap. For $4x$ , our main network configuration, we find that BMG is $20\%$ slower in terms of wall-clock time. Further, we find that neither STACX nor BMG can fit the $4x$ network size in memory when $K = 8$ . + +![](images/76b1998d6b6839d331e77a4dcbd2430625bf47a0de35ecfecf17a9aca935b5c8.jpg) +Figure 16: Atari $K$ vs $L$ ablation. We report episode returns, normalized to be in the range [0, max return] for each game for ease of comparison. Shading depicts standard deviation across 3 seeds. $D$ denotes the default BMG configuration for $L = 1$ , with $L = 4$ analogously defined. $K = 2$ denotes $L = 1$ , but $K = 2$ steps on agent parameters. + +Second, consider the effect of increasing $L$ with BMG (with $K = 1$ ).
For $1x$ , we observe no difference in speed for any $H$ . However, increasing $L$ yields a dramatic improvement in scaling for $H > 2$ , especially for larger networks. In fact, $L = 4$ exhibits a factor-2 speed-up compared to STACX for $H = 4, 4x$ and is two orders of magnitude faster for $H = 8, 2x$ . + +# C.5 ADDITIONAL RESULTS + +Figure 19 presents per-game learning curves for the main configurations considered in this paper. Table 9 presents mean episode returns per game between 190-200 million frames for all main configurations. + +![](images/6ccb6d06f1aa178eb0828437bea8e7b5340a72a1de673cac637e01423cf0798b.jpg) +Figure 17: Atari: Computational characteristics as a function of network size (see Appendix C.4) and meta-learning horizon $H$ . When $H = K$ , we vary the number of update steps to backpropagate through (with $L = 1$ for BMG). When $H = L$ , we vary the number of target update steps (with $K = 1$ ). Measurements are taken over the first 20 million learning frames on the game Pong. + +Finally, we consider two variations of BMG in the $L = 1$ regime (Figure 18); one version (NS) re-computes the agent update after updating meta-parameters, as a form of trust-region method. The other version (DB) exploits that the target has taken a further update step and uses the target as the new agent parameters. While NS is largely on par, interestingly, DB fails completely. + +# C.6 DATA AND HYPER-PARAMETER SELECTION + +We use the ALE Atari environment, publicly available at https://github.com/mgbellemare/Arcade-Learning-Environment, licensed under GNU GPL 2.0. Environment hyper-parameters were selected based on prior works (Mnih et al., 2013; Espeholt et al., 2018; Zahavy et al., 2020; Schmitt et al., 2020). Network, optimisation, and meta-optimisation hyper-parameters are based on the original STACX implementation and tuned for optimal performance. Our median human normalized score matches published results.
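The median human-normalized score is conventionally computed per game as (score - random) / (human - random), then aggregated by the median across games; a minimal sketch with made-up reference values (the real per-game random/human references come from the published Atari evaluation protocol):

```python
import statistics

def human_normalized(score, random_score, human_score):
    # 0.0 corresponds to random play, 1.0 to the human reference score.
    return (score - random_score) / (human_score - random_score)

# Illustrative numbers only, for three hypothetical games.
per_game = [
    human_normalized(300.0, 100.0, 200.0),  # well above human
    human_normalized(150.0, 100.0, 200.0),  # halfway to human
    human_normalized(100.0, 100.0, 200.0),  # random-level
]
median_hns = statistics.median(per_game)  # robust to a few outlier games
```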
For BMG, we did not tune these hyper-parameters, except for $L > 1$ . In this case, we observed that unique replay data in the initial $L - 1$ steps was necessary to yield any benefits. We observed a tendency to crash, and thus reduced the gradient clipping ratio from 0.3 to 0.2. For BMG configurations that use both policy and value matching, we tuned the weight on value matching by a grid search over $\{0.25, 0.5, 0.75\}$ on Ms Pacman, Zaxxon, Wizard of Wor, and Seaquest, with 0.25 performing best. + +![](images/63e88932c7beda888157a74ef8f7e554752b5ba1cfab51f9bdaef4579f56b5e0.jpg) +Figure 18: Atari BMG, alternative meta-update strategies. NS re-computes the agent update after the meta-update, akin to a trust-region method. DB uses the bootstrap target as the next agent parameters. Shading depicts standard deviation across 3 seeds. + +# D MULTI-TASK META-LEARNING + +# D.1 PROBLEM FORMULATION + +Let $p(\tau)$ denote a given task distribution, where $\tau \in \mathbb{N}$ indexes a task $f^{\tau}$ . Each task is also associated with distinct learner states $\mathbf{h}_{\tau}$ and task parameters $\mathbf{x}_{\tau}$ , but all task learners use the same meta-learned update rule defined by meta-parameters $\mathbf{w}$ . Hence, the meta-learner's problem is again to learn an update rule, but now in expectation over all learning problems. The MG update (Eq. 1) thus takes the form $\mathbf{w}' = \mathbf{w} - \beta \nabla_{w} \mathbb{E}_{\tau}[f^{\tau}(\mathbf{x}_{\tau}^{(K)}(\mathbf{w}))]$ , where the expectation is with respect to $(f^{\tau}, \mathbf{h}_{\tau}, \mathbf{x}_{\tau})$ and $\mathbf{x}_{\tau}^{(K)}(\mathbf{w})$ is the $K$ -step update on task $\tau$ given $(f^{\tau}, \mathbf{h}_{\tau}, \mathbf{x}_{\tau})$ . Since $p(\tau)$ is independent of $\mathbf{w}$ , this update becomes $\mathbf{w}' = \mathbf{w} - \beta \mathbb{E}_{\tau}[\nabla_{w} f^{\tau}(\mathbf{x}_{\tau}^{(K)}(\mathbf{w}))]$ , i.e.
the single-task meta-gradient in Section 3 in expectation over the task distribution. + +With that said, the expectation involves integrating over $(\mathbf{h}_{\tau},\mathbf{x}_{\tau})$ . This distribution is defined differently depending on the problem setup. In few-shot learning, $\mathbf{x}_{\tau}$ and $\mathbf{h}_{\tau}$ are typically shared initialisations (Finn et al., 2017; Nichol et al., 2018; Flennerhag et al., 2019) and the $f^{\tau}$ differ in terms of the data (Vinyals et al., 2016). However, it is possible to view the expectation as a prior distribution over task parameters (Grant et al., 2018; Flennerhag et al., 2020). In online multi-task learning, this expectation often reduces to an expectation over current task-learning states (Rusu et al., 2015; Denevi et al., 2019). + +The BMG update is analogously defined. Given a target bootstrap (TB) $\xi$ , define the task-specific target as $\tilde{\mathbf{x}}_{\tau} = \xi (\mathbf{x}_{\tau}^{(K)})$ . The BMG update takes the form $\mathbf{w}' = \mathbf{w} - \beta \nabla_w\mathbb{E}_\tau [\mu_\tau (\tilde{\mathbf{x}}_\tau ,\mathbf{x}_\tau^{(K)}(\mathbf{w}))]$ , where $\mu_{\tau}$ is the matching loss defined on data from task $\tau$ . As with the MG update, since the task distribution is independent of $\mathbf{w}$ , this simplifies to $\mathbf{w}' = \mathbf{w} - \beta \mathbb{E}_\tau [\nabla_w\mu_\tau (\tilde{\mathbf{x}}_\tau ,\mathbf{x}_\tau^{(K)}(\mathbf{w}))]$ . Hence, as with MG, the multi-task BMG update is an expectation over the single-task BMG update in Section 3. See Algorithm 7 for a detailed description. + +# D.2 FEW-SHOT MINIIMAGENET + +Setup MiniImagenet (Vinyals et al., 2016; Ravi & Larochelle, 2017) is a sub-sample of the Imagenet dataset (Deng et al., 2009). Specifically, it is a subset of 100 classes sampled randomly from the 1000 classes in the ILSVRC-12 training set, with 600 images for each class.
We follow the standard protocol (Ravi & Larochelle, 2017) and split classes into non-overlapping meta-training, meta-validation, and meta-test sets with 64, 16, and 20 classes each, respectively. The dataset is licenced under the MIT licence and the ILSVRC licence. The dataset can be obtained from https://paperswithcode.com/dataset/miniimagenet-1. N-way-M-shot classification tasks are sampled following standard protocol (Vinyals et al., 2016). For each task, $N = 5$ classes are randomly sampled from the meta-training, meta-validation, or meta-test classes, as appropriate. For each class, $M$ observations are randomly sampled without replacement. The task validation set is constructed similarly from a disjoint set of 5 images per class. We follow the original MAML protocol for meta-training (Finn et al., 2017), taking $K$ task adaptation steps during meta-training and 10 adaptation steps during meta-testing. + +We study how the data efficiency and computational efficiency of the BMG meta-objective compare against those of the MG meta-objective. To this end, for data efficiency, we report the meta-test set performance as we vary the number of meta-batches each algorithm is allowed for meta-training. As more meta-batches mean more meta-tasks, this metric captures how well they leverage additional data. For computational efficiency, we instead report meta-test set performance as a function of total meta-training time. This metric captures computational trade-offs that arise in either method. + +For any computational budget in either regime (i.e. $N$ meta-batches or $T$ hours of training), we report meta-test set performance across 3 seeds for the hyper-parameter configuration with the best validation performance (Table 4). This reflects the typical protocol for selecting hyper-parameters, and what each method would attain under a given budget. For both methods, we sweep over the meta-learning rate $\beta$ ; for shorter training runs, a higher meta-learning rate is critical to converge quickly.
This, however, leads to sub-optimal performance for larger meta-training budgets, where a smaller meta-learning rate can produce better results. The main determinant of computational cost is the number of steps to backpropagate through, $K$ . For BMG, we sweep over $K \in \{1,5\}$ . For MG, we sweep over $K \in \{1,5,10\}$ . We allow $K = 10$ for MAML to ensure a fair comparison, as BMG can extend its effective meta-learning horizon through the target bootstrap; we sweep over $L \in \{1,5,10\}$ . Note that the combination of $K$ and $L$ effectively lets BMG interpolate between different computational trade-offs. Standard MG does not have this property, but several first-order approximations have been proposed: we allow the MG approach to switch from a full meta-gradient to either the FOMAML approximation (Finn et al., 2017) or the ANIL approximation (Raghu et al., 2020). + +Model, compute, and shared hyper-parameters We use the standard convolutional model (Vinyals et al., 2016), which is a 4-layer convolutional network followed by a final linear layer. Each convolutional layer is defined by a $3 \times 3$ kernel with 32 channels, strides of 1, with batch normalisation, a ReLU activation, and $2 \times 2$ max-pooling. We use the same optimisation and meta-optimisation hyper-parameters as the original MAML implementation, except as specified in Table 4. Each model is trained on a single machine and runs on an NVIDIA V100 GPU. + +Table 4: Hyper-parameter sweep per computational budget.
| Hyper-parameter | MAML | BMG |
| --- | --- | --- |
| $\beta$ | {0.0001, 0.001} | {0.0001, 0.001} |
| $K$ | {1, 5, 10} | {1, 5} |
| $L$ | -- | {1, 5, 10} |
| $\mu$ | -- | {KL $(\mathbf{x} \parallel \cdot)$, KL $(\cdot \parallel \mathbf{x})$} |
| FOMAML | {True, False} | -- |
| ANIL | {True, False} | -- |
| Total | 24 | 24 |
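The Total row is simply the size of each cross-product sweep; as a quick check (values taken from Table 4):

```python
from itertools import product

# MAML sweeps beta x K x FOMAML x ANIL; BMG sweeps beta x K x L x mu.
maml_sweep = list(product([1e-4, 1e-3], [1, 5, 10], [True, False], [True, False]))
bmg_sweep = list(product([1e-4, 1e-3], [1, 5], [1, 5, 10], ["KL(x||.)", "KL(.||x)"]))
```

Both cross-products contain 2 x 3 x 2 x 2 = 2 x 2 x 3 x 2 = 24 configurations.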
+ +Table 6: Effect of BMG on ill-conditioning and meta-gradient variance on 5-way-5-shot MiniImagenet. Estimated meta-gradient cosine similarity $(\theta)$ between consecutive gradients, meta-gradient variance $(\mathbb{V})$ , and meta-gradient norm to variance ratio $(\rho)$ . Standard deviation across 5 independent seeds. + +
| $K$ | $L$ | MAML $\theta$ | MAML $\mathbb{V}$ | MAML $\rho$ | BMG $\theta$ | BMG $\mathbb{V}$ | BMG $\rho$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 0.17 (0.01) | 0.21 (0.01) | 0.02 (0.02) | 0.17 (0.01) | 0.0002 (5e-6) | 0.59 (0.03) |
| 1 | 5 | -- | -- | -- | 0.18 (0.01) | 0.001 (1e-5) | 0.23 (0.01) |
| 1 | 10 | -- | -- | -- | 0.19 (0.01) | 0.0003 (2e-5) | 0.36 (0.01) |
| 5 | 1 | 0.03 (0.01) | 0.07 (0.009) | 0.08 (0.03) | 0.03 (0.005) | 0.01 (9e-5) | 0.84 (0.03) |
| 5 | 5 | -- | -- | -- | 0.04 (0.005) | 0.001 (5e-5) | 0.46 (0.02) |
| 5 | 10 | -- | -- | -- | 0.05 (0.004) | 0.003 (3e-5) | 0.18 (0.02) |
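The $\theta$, $\mathbb{V}$, and $\rho$ diagnostics in Table 6 can be estimated along these lines (a minimal sketch; the paper's exact estimator and batching details are not reproduced here):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def cosine_similarity(g1, g2):
    # theta: alignment of consecutive meta-gradients (1.0 = no interference).
    dot = sum(a * b for a, b in zip(g1, g2))
    n1 = math.sqrt(sum(a * a for a in g1))
    n2 = math.sqrt(sum(b * b for b in g2))
    return dot / (n1 * n2)

def gradient_variance(grads):
    # V: per-coordinate variance across a batch of gradient estimates, averaged.
    dim = len(grads[0])
    mu = [mean([g[i] for g in grads]) for i in range(dim)]
    return mean([mean([(g[i] - mu[i]) ** 2 for g in grads]) for i in range(dim)])

def norm_to_variance_ratio(grads):
    # rho: norm of the mean gradient over its variance, a rough signal-to-noise proxy.
    mu = [mean([g[i] for g in grads]) for i in range(len(grads[0]))]
    return math.sqrt(sum(m * m for m in mu)) / gradient_variance(grads)
```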
+ +# D.3 ANALYSIS + +In terms of data efficiency, Table 7 reports the best hyper-parameters for each data budget. For both BMG and MG, we note that small budgets rely on fewer steps to backpropagate through and a higher learning rate. BMG tends to prefer a longer target bootstrap in this regime. MG switches to backpropagation through $K > 1$ sooner than BMG, roughly around 70 000 meta-updates, while BMG switches around 120 000 meta-updates. This explains why BMG can achieve higher performance faster: it attains similar performance without backpropagating through more than one update. It is worth noting that, as BMG is given larger training budgets, shorter target bootstraps generalize better, which prevents meta-overfitting. We find that other hyper-parameters are not important for overall performance. + +Table 5: Meta-training steps per second for MAML and BMG on 5-way-5-shot MiniImagenet. Standard deviation across 5 seeds in parentheses.
| $K$ | $L$ | $H = K + L$ | MAML | BMG |
| --- | --- | --- | --- | --- |
| 1 | 1 | 2 | 14.3 (0.4) | 12.4 (0.5) |
| 1 | 5 | 6 | -- | 6.9 (0.3) |
| 1 | 10 | 11 | -- | 4.4 (0.1) |
| 5 | 1 | 6 | 4.4 (0.06) | 4.2 (0.04) |
| 5 | 5 | 10 | -- | 3.2 (0.03) |
| 5 | 10 | 15 | -- | 2.5 (0.01) |
| 10 | 1 | 11 | 2.3 (0.01) | 2.2 (0.01) |
| 10 | 5 | 15 | -- | 1.9 (0.01) |
| 10 | 10 | 20 | -- | 1.7 (0.01) |
| 15 | -- | 15 | 1.4 (0.01) | -- |
| 20 | -- | 20 | 1.1 (0.01) | -- |
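Table 5's horizon column is simply the effective meta-learning horizon $H = K + L$ ($K$ update steps backpropagated through plus $L$ target bootstrap steps); as a check over the $(K, L)$ settings where both are defined:

```python
# (K, L) pairs from Table 5 where both values are defined.
rows = [(1, 1), (1, 5), (1, 10), (5, 1), (5, 5), (5, 10), (10, 1), (10, 5), (10, 10)]
horizons = [k + l for k, l in rows]  # matches Table 5's H column
```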
+ +In terms of computational efficiency, Table 8 reports the best hyper-parameters for each time budget. The pattern here follows a similar trend. MG does better under a lower learning rate already after 4 hours, whereas BMG switches after about 8 hours. This data highlights the dominant role $K$ plays in determining training time. + +We compare wall-clock time per meta-training step for various values of $K$ and $L$ in Table 5. In our main configuration, i.e. $K = 5$ , $L = 10$ , BMG achieves a throughput of 2.5 meta-training steps per second, compared to 4.4 for MAML, making BMG $50\%$ slower. In this setting, BMG has an effective meta-learning horizon of 15, whereas MAML has a horizon of 5. For MAML to achieve an effective horizon of 15, its throughput would be reduced to 1.4, instead making MAML $56\%$ slower than BMG. + +Finally, we conduct a similar analysis as on Atari (Appendix C.1) to study the effect BMG has on ill-conditioning and meta-gradient variance. We estimate ill-conditioning through cosine similarity between consecutive meta-gradients, and meta-gradient variance on a per meta-batch basis. We report mean statistics for the 5-way-5-shot setup between 100 000 and 150 000 meta-gradient steps, with standard deviation over 5 independent seeds, in Table 6. + +Unsurprisingly, MAML and BMG are similar in terms of curvature, as both can have a KL-divergence type of meta-objective. BMG obtains greater cosine similarity as $L$ increases, suggesting that BMG can transfer more information by having a higher temperature in its target. However, BMG exhibits substantially lower meta-gradient variance, and the ratio of meta-gradient norm to variance is an order of magnitude larger. + +Algorithm 7 Supervised multi-task meta-learning with BMG +Require: $K,L$ meta-update length, bootstrap length. +Require: $M,N,T$ meta-batch size, inner batch size, meta-training steps. +Require: $\mathbf{x}\in \mathbb{R}^{n_x},\mathbf{w}\in \mathbb{R}^{n_w}$ model and meta parameters.
for $t = 1,2,\ldots ,T$ do
  $\mathbf{g}\leftarrow 0$ ▷ Initialise meta-gradient.
  for $i = 1,2,\ldots ,M$ do
    $\tau \sim p(\tau)$ , $\mathbf{x}_{\tau}\leftarrow \mathbf{x}$ ▷ Sample task.
    for $k = 1,2,\ldots ,K$ do
      $\zeta_{\tau}\sim p_{\text{train}}(\zeta \mid \tau)$ ▷ Sample batch of task training data.
      $\mathbf{x}_{\tau}\leftarrow \mathbf{x}_{\tau} + \varphi (\mathbf{x}_{\tau},\zeta_{\tau},\mathbf{w})$ ▷ Task adaptation.
    end for
    $\mathbf{x}^{(K)}\leftarrow \mathbf{x}_{\tau}$ ▷ $K$ -step adaptation.
    for $l = 1,2,\ldots ,L - 1$ do
      $\zeta_{\tau}\sim p_{\text{train}}(\zeta \mid \tau)$ ▷ Sample batch of task training data.
      $\mathbf{x}_{\tau}\leftarrow \mathbf{x}_{\tau} + \varphi (\mathbf{x}_{\tau},\zeta_{\tau},\mathbf{w})$ ▷ Target update step.
    end for
    $\zeta_{\tau}\sim p_{\text{test}}(\zeta \mid \tau)$ ▷ Sample batch of task test data.
    if final gradient step then
      $\tilde{\mathbf{x}}_{\tau}\leftarrow \mathbf{x}_{\tau} - \alpha \nabla_{x}\ell (\mathbf{x}_{\tau},\zeta_{\tau})$ ▷ Assign target.
    else
      $\tilde{\mathbf{x}}_{\tau}\leftarrow \mathbf{x}_{\tau} + \varphi (\mathbf{x}_{\tau},\zeta_{\tau},\mathbf{w})$ ▷ Assign target.
    end if
    $\mathbf{g}\leftarrow \mathbf{g} + \nabla_{w}\mu (\tilde{\mathbf{x}}_{\tau},\mathbf{x}^{(K)}(\mathbf{w}))$ ▷ Accumulate matching-loss gradient.
  end for
  $\mathbf{w}\leftarrow \mathbf{w} - \frac{\beta}{M}\mathbf{g}$ ▷ BMG outer step.
end for

Table 7: Data-efficiency: mean meta-test accuracy over 3 seeds for best hyper-parameters per data budget. $\mu = 1$ corresponds to KL $(\tilde{\mathbf{x}}\parallel \cdot)$ and $\mu = 2$ to KL $(\cdot \parallel \tilde{\mathbf{x}})$ .
| Steps (K) | $\beta$ (BMG) | $K$ | $L$ | $\mu$ | Acc. (%) | $\beta$ (MAML) | $K$ | FOMAML | ANIL | Acc. (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 10 | $10^{-3}$ | 1 | 10 | 1 | 61.4 | $10^{-3}$ | 1 | False | True | 61.7 |
| 20 | $10^{-3}$ | 1 | 10 | 1 | 61.8 | $10^{-3}$ | 1 | False | False | 61.9 |
| 30 | $10^{-3}$ | 1 | 10 | 1 | 62.5 | $10^{-3}$ | 10 | False | True | 62.3 |
| 40 | $10^{-3}$ | 5 | 1 | 1 | 63.1 | $10^{-3}$ | 5 | False | False | 62.7 |
| 50 | $10^{-3}$ | 5 | 1 | 1 | 63.5 | $10^{-3}$ | 10 | False | True | 62.9 |
| 60 | $10^{-3}$ | 5 | 1 | 1 | 63.7 | $10^{-3}$ | 1 | False | False | 63.0 |
| 70 | $10^{-3}$ | 1 | 1 | 2 | 63.7 | $10^{-3}$ | 5 | False | False | 63.0 |
| 80 | $10^{-3}$ | 5 | 1 | 1 | 63.7 | $10^{-4}$ | 5 | False | False | 63.1 |
| 90 | $10^{-3}$ | 5 | 1 | 1 | 63.8 | $10^{-3}$ | 5 | False | False | 63.3 |
| 100 | $10^{-3}$ | 1 | 1 | 2 | 63.8 | $10^{-4}$ | 5 | False | False | 63.4 |
| 110 | $10^{-3}$ | 1 | 1 | 2 | 63.9 | $10^{-4}$ | 5 | False | False | 63.6 |
| 120 | $10^{-4}$ | 5 | 5 | 1 | 63.9 | $10^{-4}$ | 5 | False | False | 63.6 |
| 130 | $10^{-4}$ | 5 | 10 | 1 | 64.0 | $10^{-4}$ | 5 | False | False | 63.6 |
| 140 | $10^{-4}$ | 5 | 5 | 1 | 64.1 | $10^{-4}$ | 5 | False | False | 63.6 |
| 150 | $10^{-4}$ | 5 | 5 | 1 | 64.2 | $10^{-4}$ | 10 | False | True | 63.6 |
| 160 | $10^{-4}$ | 5 | 5 | 1 | 64.3 | $10^{-4}$ | 5 | False | False | 63.6 |
| 170 | $10^{-4}$ | 5 | 5 | 1 | 64.4 | $10^{-4}$ | 5 | False | False | 63.7 |
| 180 | $10^{-4}$ | 5 | 5 | 1 | 64.5 | $10^{-4}$ | 10 | False | False | 63.8 |
| 190 | $10^{-4}$ | 5 | 10 | 1 | 64.6 | $10^{-4}$ | 5 | False | False | 63.9 |
| 200 | $10^{-3}$ | 5 | 10 | 2 | 64.7 | $10^{-4}$ | 10 | False | False | 64.0 |
| 210 | $10^{-4}$ | 5 | 1 | 1 | 64.7 | $10^{-4}$ | 5 | False | False | 64.1 |
| 220 | $10^{-4}$ | 5 | 5 | 1 | 64.7 | $10^{-4}$ | 10 | False | False | 64.2 |
| 230 | $10^{-4}$ | 5 | 5 | 1 | 64.8 | $10^{-4}$ | 5 | False | False | 64.2 |
| 240 | $10^{-4}$ | 5 | 1 | 2 | 64.8 | $10^{-4}$ | 5 | False | False | 64.1 |
| 250 | $10^{-4}$ | 5 | 5 | 1 | 64.9 | $10^{-4}$ | 5 | False | False | 64.1 |
| 260 | $10^{-4}$ | 5 | 1 | 1 | 64.9 | $10^{-4}$ | 5 | False | False | 64.0 |
| 270 | $10^{-4}$ | 5 | 1 | 1 | 64.8 | $10^{-4}$ | 5 | False | False | 63.9 |
| 280 | $10^{-4}$ | 5 | 1 | 1 | 64.8 | $10^{-4}$ | 5 | False | False | 63.8 |
| 290 | $10^{-4}$ | 5 | 1 | 1 | 64.7 | $10^{-4}$ | 5 | False | False | 63.8 |
| 300 | $10^{-4}$ | 5 | 5 | 1 | 64.7 | $10^{-4}$ | 5 | False | False | 63.8 |
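The selection protocol behind Tables 7 and 8 (for each budget, report the configuration with the best validation performance) can be sketched as follows; the config names and accuracies below are illustrative, not the paper's:

```python
def best_per_budget(results):
    """results: {budget: {config_name: val_acc}} -> chosen config per budget."""
    return {budget: max(configs, key=configs.get) for budget, configs in results.items()}

# Illustrative numbers only; the actual sweep is defined in Table 4.
results = {
    10:  {"beta=1e-3,K=1,L=10": 61.4, "beta=1e-4,K=5,L=5": 60.2},
    200: {"beta=1e-3,K=1,L=10": 62.9, "beta=1e-4,K=5,L=5": 64.6},
}
chosen = best_per_budget(results)
```

This mirrors the pattern in the tables: short budgets favour cheap, high-learning-rate configurations, while long budgets favour smaller learning rates and more adaptation steps.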
+ +Table 8: Computational-efficiency: mean meta-test accuracy over 3 seeds for best hyper-parameters per time budget. $\mu = 1$ corresponds to KL $(\tilde{\mathbf{x}}\parallel \cdot)$ and $\mu = 2$ to KL $(\cdot \parallel \tilde{\mathbf{x}})$ . + +
| Time (h) | $\beta$ (BMG) | $K$ | $L$ | $\mu$ | Acc. (%) | $\beta$ (MAML) | $K$ | FOMAML | ANIL | Acc. (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | $10^{-3}$ | 1 | 1 | 2 | 63.5 | $10^{-3}$ | 1 | False | False | 63.0 |
| 2 | $10^{-3}$ | 1 | 1 | 2 | 63.6 | $10^{-3}$ | 10 | False | True | 63.0 |
| 3 | $10^{-3}$ | 5 | 1 | 1 | 63.7 | $10^{-3}$ | 5 | False | False | 63.0 |
| 4 | $10^{-3}$ | 5 | 1 | 1 | 63.8 | $10^{-4}$ | 5 | False | True | 63.1 |
| 4 | $10^{-3}$ | 5 | 1 | 1 | 63.8 | $10^{-4}$ | 1 | False | True | 63.4 |
| 5 | $10^{-3}$ | 5 | 1 | 1 | 63.8 | $10^{-4}$ | 5 | False | False | 63.5 |
| 6 | $10^{-3}$ | 5 | 10 | 1 | 63.8 | $10^{-4}$ | 5 | False | False | 63.6 |
| 7 | $10^{-4}$ | 5 | 1 | 1 | 63.8 | $10^{-4}$ | 5 | False | False | 63.6 |
| 8 | $10^{-3}$ | 5 | 1 | 1 | 63.8 | $10^{-4}$ | 5 | False | False | 63.6 |
| 9 | $10^{-4}$ | 5 | 1 | 1 | 63.9 | $10^{-4}$ | 5 | False | False | 63.6 |
| 10 | $10^{-4}$ | 5 | 1 | 1 | 64.2 | $10^{-4}$ | 5 | False | False | 63.7 |
| 11 | $10^{-4}$ | 5 | 5 | 1 | 64.3 | $10^{-4}$ | 5 | False | False | 63.8 |
| 12 | $10^{-4}$ | 5 | 5 | 1 | 64.5 | $10^{-4}$ | 5 | False | False | 63.9 |
| 13 | $10^{-4}$ | 5 | 5 | 1 | 64.6 | $10^{-4}$ | 5 | False | False | 63.9 |
| 14 | $10^{-4}$ | 5 | 1 | 2 | 64.7 | $10^{-4}$ | 5 | False | False | 63.8 |
| 15 | $10^{-4}$ | 5 | 1 | 1 | 64.8 | $10^{-4}$ | 5 | False | False | 63.4 |
| 16 | $10^{-4}$ | 5 | 1 | 1 | 64.8 | $10^{-3}$ | 10 | False | False | 63.2 |
| 17 | $10^{-4}$ | 5 | 1 | 1 | 64.8 | $10^{-4}$ | 10 | False | False | 63.3 |
| 18 | $10^{-4}$ | 5 | 10 | 1 | 64.8 | $10^{-4}$ | 10 | False | False | 63.5 |
| 19 | $10^{-4}$ | 5 | 5 | 1 | 64.8 | $10^{-4}$ | 10 | False | False | 63.6 |
| 20 | $10^{-4}$ | 5 | 5 | 1 | 64.7 | $10^{-4}$ | 10 | False | False | 63.8 |
| 21 | $10^{-4}$ | 5 | 10 | 1 | 64.7 | $10^{-4}$ | 10 | False | False | 63.9 |
| 21 | $10^{-4}$ | 5 | 10 | 1 | 64.7 | $10^{-4}$ | 10 | False | False | 63.8 |
| 22 | $10^{-4}$ | 5 | 5 | 1 | 64.7 | $10^{-4}$ | 10 | False | False | 63.9 |
| 23 | $10^{-4}$ | 5 | 10 | 1 | 64.7 | $10^{-4}$ | 10 | False | False | 63.8 |
| 24 | $10^{-4}$ | 5 | 10 | 1 | 64.7 | -- | -- | -- | -- | -- |
+ +![](images/152e2e32235f2d5d44ac570f95bbf16c863b0a3a31012929470201c4291f59c0.jpg) +Figure 19: Atari, per-game performance across 3 seeds. Shading depicts standard deviation. + +Table 9: Mean per-game performance between 190-200M frames. + +
KLKL & VKL & V, L=4KL-SL2STACXV
Alien4567744880580673575050692318097964
Amidar4800709975284974769137191896
Assault2033429473330192174728301196484101
Asterix511550439475533385487367679824561786053
Asteroids145337238320289689858522036615609656577
Atlantis831920813772814780806698854441848007648988
Bank Heist5711325013116513291339
Battle Zone73323884078835078941504537835972787
Beam Rider37170516495740941454677266289274397
Berzerk2114629461588218324015231069
Bowling46504246502852
Boxing10010086100100100100
Breakout74283284777482771716
Centipede5370325427305588492915695503944783478895
Chopper Command83077293486383809073601211274846788341350
Crazy Climber233445212229265729199150229496182617126353
Defender3934573740124218943640536919334445355152
Demon Attack132508133109133571132529133469130741129863
Double Dunk22232321232423
Enduro234923492350236023652592187
Fishing Derby41636841526259
Freeway10302531301833
Frostbite88203895399555471347725221669
Gopher116010116037122459921851227908709411920
Gravitar27170974825935942746944
Hero60896485515243256044516313555920235
Ice Hockey515204151920
Jamesbond22129259513015725766182002612323263
Kangaroo12200125571317419401323531828722
Krull107509768105101115610502104808899
Kung Fu Master51038587325435454559636326782354584
Montezuma Revenge0000000
Ms Pacman2592622876282792626727564126472759
Name This Game31203318633683830912323442461612583
Phoenix529404542998658082407520440821370270247854
Pitfall0-100000
Pong21212121212121
Private Eye165144981306710068
Qbert8721437135723203004775197272643901
Riverraid12951513275132300912671771274767126418
Road Runner24037761710521596170024245886219134773
Robotank64667165646165
Seaquest684870218982925616738147717443653
Skiing-10023-8988-9797-8988-9893-10504-13312
Solaris2120218221881858219423262202
Space Invaders35762540464079011314493333487515424
Star Gunner5883776634777908335874113951029844843561
Surround99109939
Tennis23242321241924
Time Pilot94746609186862695854934664993240127
Tutankham282268291280288101205
Up N Down34212130374138178010939220271531558817252
Venture0000000
Video Pinball23025247986139909450521248585244122077100
Wizard Of Wor21597457314980622936108174785424250
Yars Revenge770012867344080613203139865611365177169
Zaxxon44280494485901136261497345695235494
# COMPARING DISTRIBUTIONS BY MEASURING DIFFERENCES THAT AFFECT DECISION MAKING + +Shengjia Zhao*, Abhishek Sinha*, Yutong He*, Aidan Perreault, Jiaming Song, Stefano Ermon + +Department of Computer Science + +Stanford University + +{sjzhao,a7b23,kellyyhe,aperr,tsong,ermon}@stanford.edu + +# ABSTRACT + +Measuring the discrepancy between two probability distributions is a fundamental problem in machine learning and statistics. We propose a new class of discrepancies based on the optimal loss for a decision task – two distributions are different if the optimal decision loss is higher on their mixture than on each individual distribution.
By suitably choosing the decision task, this generalizes the Jensen-Shannon divergence and the maximum mean discrepancy family. We apply our approach to two-sample tests, and on various benchmarks, we achieve superior test power compared to competing methods. In addition, a modeler can directly specify their preferences when comparing distributions through the decision loss. We apply this property to understanding the effects of climate change on different economic activities and selecting features targeting different decision tasks. + +# 1 INTRODUCTION + +Quantifying the difference between two probability distributions is a fundamental problem in machine learning. Modelers choose different types of discrepancies (or probability divergences) to encode their prior knowledge about which aspects are relevant to evaluate the difference. Integral probability metrics (IPMs, Müller (1997)) and $f$ -divergences (Csiszár, 1964) are widely used discrepancies in machine learning. IPMs, such as the Wasserstein distance and the maximum mean discrepancy (MMD) (Rao, 1982; Burbea & Rao, 1984; Gretton et al., 2012), are based on the idea that if two distributions are identical, any function should have the same expectation under both distributions. IPMs are used to define training objectives for generative models (Arjovsky et al., 2017), perform independence tests (Doran et al., 2014), and robust optimization (Esfahani & Kuhn, 2018), among many other applications. $f$ -divergences, such as the KL divergence and the Jensen Shannon divergence, are based on the idea that if two distributions are identical, they assign the same likelihood to every point. One can then define a discrepancy based on how different the likelihood ratio is from one. The KL divergence underlies some of the most commonly used training objectives for both supervised and unsupervised machine learning algorithms, such as cross entropy loss.
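For intuition, both families can be computed in closed form on small discrete distributions; a minimal sketch (the two distributions below are illustrative):

```python
import math

p = [0.5, 0.5]  # distribution over two outcomes
q = [0.9, 0.1]

# f-divergence with f(t) = t * log(t): the KL divergence, driven by likelihood ratios.
kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# An IPM: sup over a function class F of |E_p f - E_q f|. With F = all functions
# into [-1, 1], the sup over the box is attained at an extreme point and equals
# twice the total variation distance.
F = [(-1, 1), (1, -1), (1, 1), (-1, -1)]
ipm = max(abs(sum(fi * (pi - qi) for fi, pi, qi in zip(f, p, q))) for f in F)
```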
+ +We propose a third category of divergences called H-divergences that overlaps with, but also extends, the set of integral probability metrics and the set of $f$ -divergences. Intuitively, H-divergence compares two distributions in terms of the optimal loss for a certain decision task. This optimal loss corresponds to a generalized notion of entropy (DeGroot et al., 1962). Instead of measuring the best average code length of any encoding scheme (Shannon entropy), the generalized entropy uses an arbitrary loss function (rather than code length) and an arbitrary set of actions (rather than encoding schemes), and is defined as the best expected loss among the set of actions. In particular, given two distributions $p$ and $q$ , we compare the generalized entropy of the mixture distribution $(p + q) / 2$ and the generalized entropy of $p$ and $q$ individually. Intuitively, if $p$ and $q$ are different, it is more difficult to minimize expected loss under the mixture distribution $(p + q) / 2$ , and hence the mixture distribution should have higher generalized entropy; if $p$ and $q$ are identical, then the mixture distribution is identical to $p$ or $q$ , and hence should have the same generalized entropy. + +Our divergence strictly generalizes the maximum mean discrepancy family and the Jensen Shannon divergence, which can be obtained with specific choices of the loss function. We illustrate this via + +![](images/f0ef2a4edf95fa376bcfa749c8cb28810db82526098fef08b351e9ecc3ae6e0c.jpg) +Figure 1: Relationship between H-divergence (this paper) and existing divergences. The Jensen Shannon divergence is an $f$ -divergence but not an IPM; the MMD is an IPM but not always an $f$ -divergence; both are H-divergences. There are H-divergences that are not $f$ -divergences or IPMs. + +the Venn diagram in Figure 1. Our formulation allows us to choose alternative losses to leverage inductive biases and machine learning models from different problem domains.
For example, if we choose the generalized entropy to be the maximum log likelihood of deep generative models, we are able to leverage recent progress in modeling high dimensional images.

We demonstrate the effectiveness of the H-divergence in two sample tests, i.e. deciding whether two sets of samples come from the same distribution or not. A test based on a probability discrepancy declares two sets of samples different if their discrepancy exceeds some threshold. We use H-divergences based on the generalized entropy defined by the log likelihood of off-the-shelf generative models. Compared to state-of-the-art tests based on MMD with deep kernels (Liu et al., 2020), tests based on the H-divergence achieve better test power (given identical Type I error) on a large set of benchmarks.

More importantly, scientists and policy makers are often interested not only in whether two distributions are different, but in how they are different and whether the differences affect decision making. Typical divergence measures (such as KL) and two sample tests only quantify whether two distributions are different, while we show that the H-divergence is a useful tool for quantifying how distributions are different, with three application examples: studying the effect of climate change, feature selection, and sample quality evaluation. In each of these examples, we compare different aspects of the distributions by choosing specific decision loss functions. For example, climate change (Figure 3) might impact agriculture in a region but not energy production, or vice versa. By choosing suitable loss functions (related to agriculture, energy, etc.) we can quantify and test whether the change in the climate distribution impacts different economic activities.

# 2 BACKGROUND

# 2.1 PROBABILITY DIVERGENCES

Let $\mathcal{X}$ denote a finite set or a finite dimensional vector space, and $\mathcal{P}(\mathcal{X})$ denote the set of probability distributions on $\mathcal{X}$ that have a density.
We consider the problem of defining a probability divergence between any two distributions in $\mathcal{P}(\mathcal{X})$ , where a probability divergence is any function $D:\mathcal{P}(\mathcal{X})\times \mathcal{P}(\mathcal{X})\to \mathbb{R}$ that satisfies $D(p\| q)\geq 0,D(p\| p) = 0,\forall p,q\in \mathcal{P}(\mathcal{X})$ . We call the divergence $D$ "strict" if $D(p\| q) > 0\forall p\neq q$ , and "non-strict" otherwise. In this paper we consider both types of divergences. + +Integral Probability Metrics Let $\mathcal{F}$ denote a set of functions $\mathcal{X} \to \mathbb{R}$ . An integral probability metric is defined as $\mathrm{IPM}_{\mathcal{F}}(p\| q) = \sup_{f \in \mathcal{F}} |\mathbb{E}_p[f(X)] - \mathbb{E}_q[f(X)]|$ . Several important divergences belong to integral probability metrics. Examples include the Wasserstein distance, where $\mathcal{F}$ is the set of 1-Lipschitz functions; the total variation distance, where $\mathcal{F}$ is the set of functions $\mathcal{X} \to [-1,1]$ . The maximum mean discrepancy (MMD) (Rao, 1982; Burbea & Rao, 1984; Gretton et al., 2012) chooses a kernel function $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+$ and is defined by + +$$ +\mathrm {M M D} ^ {2} (p \| q) = \mathbb {E} _ {p, p} k (X, Y) + \mathbb {E} _ {q, q} k (X, Y) - 2 \mathbb {E} _ {p, q} k (X, Y) +$$ + +MMD is an IPM where $\mathcal{F}$ is the unit norm functions in the reproducing kernel Hilbert space (RKHS) associated with the kernel $k$ . + +$f$ -Divergences Given any convex continuous function $f: \mathbb{R}_+ \to \mathbb{R}$ such that $f(1) = 0$ , the $f$ -Divergence is defined as (assuming densities exist) $D_f(p\|q) = \mathbb{E}_q[f(p(X)/q(X))]$ . Examples include the KL divergence, where $f: t \mapsto t\log t$ and the Jensen Shannon divergence, where $f: t \mapsto (t + 1)\log\left(\frac{2}{t + 1}\right) + t\log t$ . 
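To make the MMD definition above concrete, the plug-in sample estimate can be written in a few lines. This is our own sketch (Gaussian kernel, biased V-statistic form), not code from the paper:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    # Plug-in estimate of MMD^2(p||q) = E_{p,p} k + E_{q,q} k - 2 E_{p,q} k.
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2.0 * gaussian_kernel(X, Y, sigma).mean())
```

With identical sample sets the three terms cancel exactly, and samples from well-separated distributions yield a clearly positive estimate; the unbiased U-statistic variant used in practice additionally drops the diagonal terms.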
# 2.2 H-ENTROPY

For any action space $\mathcal{A}$ and any loss function $\ell : \mathcal{X} \times \mathcal{A} \to \mathbb{R}$ , the H-entropy (DeGroot et al., 1962; DeGroot, 2005; Grünwald et al., 2004) is defined as

$$
H _ {\ell} (p) = \inf _ {a \in \mathcal {A}} \mathbb {E} _ {p} [ \ell (X, a) ]
$$

In words, $H$ -entropy is the Bayes optimal loss of a decision maker who must select some action $a$ not for a particular $x$ , but in expectation for a random $x$ drawn from $p(x)$ . H-entropy generalizes several important notions of uncertainty. Examples include: Shannon entropy, where $\mathcal{A}$ is the set of probabilities $\mathcal{P}(\mathcal{X})$ and $\ell(x, a) = -\log a(x)$ ; variance, where $\mathcal{A} = \mathcal{X}$ and $\ell(x, a) = \|x - a\|_2^2$ ; predictive $\mathcal{V}$ -entropy, where $\mathcal{A} \subset \mathcal{P}(\mathcal{X})$ is some subset of distributions and $\ell(x, a) = -\log a(x)$ (Xu et al., 2020).

A key property we will use is that $H$ -entropy is concave (DeGroot et al., 1962).

Lemma 1. For any choice of $\ell :\mathcal{X}\times \mathcal{A}\to \mathbb{R}$ , $H_{\ell}$ is a concave function.

This lemma can be proved by observing that an infimum of functions that are linear in the distribution is concave: it is always better to pick an optimal action for $p$ and $q$ separately than a single action for both.

$$
\begin{array}{l} H _ {\ell} \big (\alpha p + (1 - \alpha) q \big) = \inf _ {a} \left(\alpha \mathbb {E} _ {p} [ \ell (X, a) ] + (1 - \alpha) \mathbb {E} _ {q} [ \ell (X, a) ]\right) \\ \geq \alpha \inf _ {a} \mathbb {E} _ {p} [ \ell (X, a) ] + (1 - \alpha) \inf _ {a} \mathbb {E} _ {q} [ \ell (X, a) ] = \alpha H _ {\ell} (p) + (1 - \alpha) H _ {\ell} (q) \\ \end{array}
$$

This lemma reflects why $H_{\ell}$ can be thought of as a measure of entropy or uncertainty. If the distribution is more uncertain (e.g.
a mixture of $p$ and $q$ , rather than $p$ or $q$ separately), then decisions made under this higher uncertainty will suffer a higher loss.

# 3 DEFINITION AND THEORETICAL PROPERTIES

# 3.1 H-JENSEN SHANNON DIVERGENCE

As a warm-up, we present a special case of our divergence.

Definition 1 (H-Jensen Shannon divergence).

$$
D _ {\ell} ^ {\mathrm {J S}} (p, q) = H _ {\ell} \left(\frac {p + q}{2}\right) - \frac {1}{2} \left(H _ {\ell} (p) + H _ {\ell} (q)\right) \tag {1}
$$

$D_{\ell}^{\mathrm{JS}}$ is always non-negative because H-entropy is concave (Lemma 1), and clearly $D_{\ell}^{\mathrm{JS}}(p,q) = 0$ whenever $p = q$ . Therefore, $D_{\ell}^{\mathrm{JS}}$ is a valid probability divergence. In particular, if we choose $H_{\ell}$ as the Shannon entropy, Definition 1 recovers the Jensen Shannon divergence. Other special choices of the loss function recover definitions in (Burbea & Rao, 1982).

# 3.2 GENERAL H-DIVERGENCE

In addition to the H-Jensen Shannon divergence, there are other functions based on the H-entropy that satisfy the requirements of a divergence. For example,

$$
D _ {\ell} ^ {\mathrm {M i n}} (p \| q) = H _ {\ell} \left(\frac {p + q}{2}\right) - \min \left(H _ {\ell} (p), H _ {\ell} (q)\right) \tag {2}
$$

is also a valid divergence (this will be proved later as a special case of Lemma 2). We can define a general set of divergences that includes the above two with the following definition:

Definition 2 (H-divergence).
For two distributions $p, q$ on $\mathcal{X}$ , given any continuous function $\phi : \mathbb{R}^2 \to \mathbb{R}$ such that $\phi(\theta, \lambda) > 0$ whenever $\theta + \lambda > 0$ and $\phi(0, 0) = 0$ , define + +$$ +D _ {\ell} ^ {\phi} (p \| q) = \phi \left(H _ {\ell} \left(\frac {p + q}{2}\right) - H _ {\ell} (p), H _ {\ell} \left(\frac {p + q}{2}\right) - H _ {\ell} (q)\right) +$$ + +Intuitively $H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(p)$ and $H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(q)$ measure how much more difficult it is to minimize loss on the mixture distribution $(p + q) / 2$ than on $p$ and $q$ respectively. $\phi$ is a general class of functions that map these differences into a scalar divergence, while satisfying some desirable properties described in the next section. + +The following proposition shows that the H-divergence generalizes the previous definitions (1) and (2). Therefore, any property of H-divergence is inherited by e.g. the H-Jensen Shannon divergence. + +Proposition 1. If $\phi (\theta ,\lambda) = \frac{\theta + \lambda}{2}$ then $D_{\ell}^{\phi}(p,q)$ is the H-Jensen Shannon divergence in Eq.(1). If $\phi (\theta ,\lambda) = \max (\theta ,\lambda)$ then $D_{\ell}^{\phi}(p,q)$ is the H-Min divergence in Eq.(2). + +# 3.3 PROPERTIES OF THE H-DIVERGENCE + +We first verify that $D_{\ell}^{\phi}$ is indeed a (strict or non-strict) probability divergence. + +Lemma 2. For any choice of $\ell$ and for any choice of $\phi$ that satisfy Definition 2, $D_{\ell}^{\phi}$ is non-negative and $D_{\ell}^{\phi}(p,q) = 0$ whenever $p = q$ . Furthermore, $D_{\ell}^{\phi}$ is symmetric whenever $\phi$ is symmetric. + +Depending on the choice of $\ell$ , $H$ -divergence may or may not be strict (i.e. whenever $p \neq q$ , $D(p\|q) > 0$ ). The following proposition characterizes conditions for a strict divergence. + +Proposition 2 (Strict Divergence). 
For any choice of $\phi$ , the following are equivalent:

1) $\forall p\neq q$ , $D_{\ell}^{\phi}(p\| q) > 0$ ;
2) the $H$ -entropy $H_{\ell}(p)\coloneqq \inf_{a}\mathbb{E}_{p}[\ell (X,a)]$ is strictly concave in $p$ ;
3) $\forall p\neq q$ , $\arg \inf_{a}\mathbb{E}_{p}[\ell (X,a)]\cap \arg \inf_{a}\mathbb{E}_{q}[\ell (X,a)] = \emptyset$ .

In particular, this proposition can be used to characterize all strict H-divergences, because the set of all losses $\ell$ that induce strictly concave H-entropy functions $H_{\ell}$ can be characterized by Fenchel duality (Duchi et al., 2018).

One important property of the H-divergence is that two distributions have non-zero divergence if and only if they have different optimal actions, i.e. the optimal solutions for their respective H-entropies are different. This is shown in the following proposition (proof in Appendix A).

Proposition 3. $D_{\ell}^{\phi}(p\| q) > 0$ if and only if $\arg \inf_{a}\mathbb{E}_{p}[\ell (X,a)]\cap \arg \inf_{a}\mathbb{E}_{q}[\ell (X,a)] = \emptyset$ .

Intuitively, $D_{\ell}^{\phi}$ only takes into account differences between distributions that lead to different optimal action choices. This property allows us to incorporate prior domain knowledge. By choosing $\mathcal{A}$ and $\ell$ we can specify which differences between distributions lead to different optimal actions, and which differences do not. For example, we can choose $\mathcal{A}$ as a set of generative models (e.g., mixtures of Gaussians) and $\ell(x, a)$ as the negative log likelihood of $x$ under generative model $a$ . If under two distributions we end up learning the same generative model (by maximizing log likelihood), the H-divergence between them is zero.

# 3.4 RELATIONSHIP TO MMD

An important special case of the H-divergence is the set of squared maximum mean discrepancies (MMD), as shown by the following theorem:

Theorem 1. The set of $H$ -Jensen Shannon divergences is strictly larger than the set of squared MMD distances.
To prove this theorem, we show that for each choice of kernel $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ , there exists an action space $\mathcal{A}$ and a loss $\ell$ such that the corresponding squared MMD distance and H-divergence are the same (see proof in Appendix A). In particular, this equivalence can be achieved by choosing $\mathcal{A}$ to be the RKHS $\mathcal{H}$ of $k(\cdot, \cdot)$ , and $\ell(x, a) = 4\|k(x, \cdot) - a\|_{\mathcal{H}}^2$ . The inclusion is strict because the Jensen Shannon divergence is an H-Jensen Shannon divergence but not a squared MMD distance.

# 3.5 ESTIMATION AND CONVERGENCE

Many machine learning tasks can be reduced to the problem of estimating the divergence between two distributions given samples. Specifically, suppose we are provided with a set of $m$ i.i.d. samples $\hat{p}_m = (x_1,\dots ,x_m)$ drawn from distribution $p$ and $\hat{q}_m = (x_1',\dots ,x_m')$ drawn from distribution $q$ , and would like to obtain an estimate of $D_{\ell}^{\phi}(p\| q)$ based on the samples. Here, $\hat{p}_m$ and $\hat{q}_m$ also denote the corresponding empirical distributions of the samples from $p$ and $q$ respectively. In this section we propose an empirical estimator for the H-divergence and show that it has favorable convergence properties.
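Before turning to the sample-based estimator, it helps to see the population quantities computed exactly in a toy setting where both $\mathcal{X}$ and $\mathcal{A}$ are finite. The loss table and probabilities below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Toy decision problem with finite outcome space X = {0, 1} and two actions.
# The loss table L[x, a] is hypothetical, for illustration only.
L = np.array([[0.0, 1.0],   # losses of actions 0 and 1 when X = 0
              [1.0, 0.0]])  # losses of actions 0 and 1 when X = 1

def h_entropy(p, L):
    # H_l(p) = inf_a E_p[l(X, a)]; with finite A the infimum is a minimum.
    return (p @ L).min()

def d_js(p, q, L):
    # H-Jensen Shannon divergence of Definition 1 / Eq. (1).
    mix = 0.5 * (p + q)
    return h_entropy(mix, L) - 0.5 * (h_entropy(p, L) + h_entropy(q, L))

p = np.array([0.9, 0.1])   # optimal action: 0
q = np.array([0.2, 0.8])   # optimal action: 1
r = np.array([0.8, 0.2])   # optimal action: 0, the same as for p

print(d_js(p, q, L))  # positive: p and q prefer different actions
print(d_js(p, r, L))  # zero (up to float error): same optimal action
```

The second print illustrates Proposition 3: although $p \neq r$, they share the same optimal action, so this (non-strict) H-divergence between them vanishes.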
+ +Let $\hat{D}_{\ell}^{\phi}(\hat{p}_m\| \hat{q}_m)$ be the empirical (random) estimator for $D_{\ell}^{\phi}(p\| q)$ defined by + +$$ +\hat {D} _ {\ell} ^ {\phi} \left(\hat {p} _ {m} \| \hat {q} _ {m}\right) = \phi \left(\inf _ {a} \frac {1}{m} \sum_ {i = 1} ^ {m} \ell \left(x _ {i} ^ {\prime \prime}, a\right) - \inf _ {a} \frac {1}{m} \sum_ {i = 1} ^ {m} \ell \left(x _ {i}, a\right), \inf _ {a} \frac {1}{m} \sum_ {i = 1} ^ {m} \ell \left(x _ {i} ^ {\prime \prime}, a\right) - \inf _ {a} \frac {1}{m} \sum_ {i = 1} ^ {m} \ell \left(x _ {i} ^ {\prime}, a\right)\right) +$$ + +where $x_{i}^{\prime \prime} = x_{i}b_{i} + x_{i}^{\prime}(1 - b_{i})$ and $b_{i}$ are i.i.d uniformly sampled from $\{0,1\}$ , so that $x_{i}^{\prime \prime}$ is a sample from the mixture distribution $(p + q) / 2$ of size $m$ . + +Using $x_{i}^{\prime \prime}$ as defined above is crucial for the convergence properties we will prove in Theorem 2. It might be tempting to replace the term $\frac{1}{m}\sum_{i = 1}^{m}\ell (x_{i}^{\prime \prime},a)$ with $\frac{1}{2m}\sum_{i = 1}^{m}(\ell (x_i,a) + \ell (x_i',a))$ to use all the available samples. However, optimizing the action based on a finite set of samples (instead of in expectation) is prone to overfitting, and introduces bias. Intuitively, using $m$ samples $(x_{i}^{\prime \prime})$ ensures the bias for the mixture is comparable to that of $p$ and $q$ . Without this, Theorem 2 is no longer true, and empirical performance also degrades. + +Before presenting the convergence results, we first must define several assumptions that make convergence possible. In particular, we are going to assume that the loss function $\ell$ is $C$ -bounded, i.e. there exists some $C$ such that $0 \leq \ell(x, a) \leq C, \forall a, x$ . This assumption seemingly excludes important special cases such as the Jensen-Shannon divergence (which is associated with the unbounded log loss). 
However, we show in the appendix that the Jensen-Shannon divergence cannot be consistently estimated in general, so it is correctly excluded by our theorem. One practical solution is to clip the log likelihood, which is the approach adopted in (Song & Ermon, 2019) for improved divergence estimation (for a similar KL divergence estimation problem).

In addition, we assume that $\phi$ is 1-Lipschitz under the $\infty$ -norm, i.e. $|\phi(\theta + d\theta, \lambda + d\lambda) - \phi(\theta, \lambda)| \leq \max(|d\theta|, |d\lambda|), \forall \theta, \lambda, d\theta, d\lambda \in \mathbb{R}$ . Both $\phi(\theta, \lambda) = \frac{\theta + \lambda}{2}$ and $\phi(\theta, \lambda) = \max(\theta, \lambda)$ are 1-Lipschitz under the $\infty$ -norm. This is a mild assumption because if $\phi$ is not 1-Lipschitz we can rescale it to make it so. Finally, define the Rademacher complexity

$$
\mathcal {R} _ {m} ^ {p} (\ell) = \mathbb {E} _ {X _ {i} \sim p, \epsilon_ {i} \sim \mathrm {U n i f o r m} (\{- 1, 1 \})} \left[ \sup _ {a \in \mathcal {A}} \frac {1}{m} \sum_ {i = 1} ^ {m} \epsilon_ {i} \ell (X _ {i}, a) \right]
$$

We define $\mathcal{R}_m^q (\ell)$ analogously. Based on these assumptions and definitions we can bound the difference between $\hat{D}_{\ell}^{\phi}(\hat{p}_m\| \hat{q}_m)$ and $D_{\ell}^{\phi}(p\| q)$ .

Theorem 2. If $\ell$ is $C$ -bounded, and $\phi$ is 1-Lipschitz under the $\infty$ -norm, then for any choice of distributions $p, q \in \mathcal{P}(\mathcal{X})$ and $t > 0$ we have

1. $\operatorname*{Pr}[\hat{D}_{\ell}^{\phi}(\hat{p}_m\| \hat{q}_m)\geq t]\leq 4e^{-\frac{t^2m}{2C^2}}$ if $p = q$
2. $\operatorname*{Pr}\left[\left|\hat{D}_{\ell}^{\phi}(\hat{p}_m\| \hat{q}_m) - D_{\ell}^{\phi}(p\| q)\right|\geq 4\max (\mathcal{R}_m^p (\ell),\mathcal{R}_m^q (\ell)) + t\right]\leq 4e^{-\frac{t^2m}{2C^2}}$

Corollary 1.
$\sqrt{\operatorname{Var}[\hat{D}_{\ell}^{\phi}(\hat{p}_m\|\hat{q}_m)]}\leq 4\max (\mathcal{R}_m^p (\ell),\mathcal{R}_m^q (\ell)) + \sqrt{2C^2 / m}$

For the proof see Appendix A. Note that when $p = q$ , the convergence of $\hat{D}_{\ell}^{\phi}(\hat{p}_m\| \hat{q}_m)$ does not depend on the Rademacher complexity of $\ell$ , and the estimator converges to 0 very quickly. When $p \neq q$ the estimator $\hat{D}_{\ell}^{\phi}(\hat{p}_m\| \hat{q}_m)$ is still consistent (under regularity assumptions).

Corollary 2 (Consistency). Under the conditions of Theorem 2, if additionally either 1. $\mathcal{A}$ is a finite set, or 2. $\mathcal{A}$ is a bounded subset of $\mathbb{R}^d$ for some $d\in \mathbb{N}$ and $\ell$ is Lipschitz w.r.t. $a$ , then almost surely $\lim_{m\to \infty}\hat{D}_{\ell}^{\phi}(\hat{p}_m\| \hat{q}_m) = D_{\ell}^{\phi}(p\| q)$ .

For both cases in Corollary 2 the Rademacher complexity $\mathcal{R}_m^p (\ell)$ goes to zero (as the sample size $m\to \infty$ ) at a rate of $O(1 / \sqrt{m})$ . In other words, as $m \to \infty$ both the estimation error in Theorem 2 and the standard deviation of the estimator are bounded by $O(1 / \sqrt{m})$ .

# 4 EXPERIMENT: TWO SAMPLE TEST

The first application is to design more powerful two sample tests. We aim to show that the H-divergence allows us to leverage inductive biases for each data type (e.g. image, bio, text) by choosing suitable actions $\mathcal{A}$ and loss $\ell$ , which leads to improved test power.

# 4.1 TWO SAMPLE TEST

In a two sample test, we would like to decide whether two sets of samples are drawn from the same distribution or not. Specifically, given two sets of samples $\hat{p}_m \coloneqq (x_1, \dots, x_m) \stackrel{\mathrm{i.i.d.}}{\sim} p$ and $\hat{q}_m \coloneqq (x_1', \dots, x_m') \stackrel{\mathrm{i.i.d.}}{\sim} q$ , we would like to decide if $p = q$ .
Typical approaches estimate a divergence $\hat{D}(\hat{p}_m \| \hat{q}_m)$ and output $p \neq q$ if the divergence exceeds some threshold.

There are two types of errors: a Type I error happens when the algorithm incorrectly outputs $p \neq q$ ; the probability of a Type I error is called the significance level. A Type II error happens when the algorithm incorrectly outputs $p = q$ ; the probability of not making a Type II error is called the test power (higher is better). Note that both the significance level and the test power are relative to the distributions $p$ and $q$ .

We follow the typical setup where we guarantee a certain significance level while empirically measuring the test power. In particular, the significance level can be guaranteed with a permutation test (Ernst et al., 2004). In a permutation test, in addition to the original sets of samples $\hat{p}_m$ and $\hat{q}_m$ , we also uniformly at random swap elements between $\hat{p}_m$ and $\hat{q}_m$ , sampling multiple randomly swapped datasets $(\hat{p}_m^1, \hat{q}_m^1)$ , $(\hat{p}_m^2, \hat{q}_m^2)$ , $\cdots$ . The testing algorithm outputs $p \neq q$ if $\hat{D}(\hat{p}_m \| \hat{q}_m)$ is in the top $\alpha$ -quantile among $\{\hat{D}(\hat{p}_m^1 \| \hat{q}_m^1), \hat{D}(\hat{p}_m^2 \| \hat{q}_m^2), \cdots\}$ . The permutation test guarantees the significance level (i.e. low Type I error) because if $p = q$ then swapping elements between $\hat{p}_m$ and $\hat{q}_m$ does not change their distribution, so each pair $(\hat{p}_m, \hat{q}_m)$ , $(\hat{p}_m^1, \hat{q}_m^1)$ , $\cdots$ has the same distribution. Therefore, $\hat{D}(\hat{p}_m \| \hat{q}_m)$ lands in the top $\alpha$ -quantile with probability at most $\alpha$ . Note that the significance level guarantee does not rely on the accurate estimation of the H-divergence in Theorem 2 (accurate H-divergence estimation is still important because the test power does depend on it).
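The permutation-test loop described above can be sketched end to end. For the divergence we plug in the empirical estimator of Section 3.5 specialized to the variance loss $\ell(x, a) = \|x - a\|_2^2$ , whose inner infimum is attained at the sample mean. This simplified loss is our own choice for illustration (the paper's experiments use generative-model log likelihoods instead), and it mainly detects mean shifts:

```python
import numpy as np

def h_js_estimate(x, y, rng):
    # Empirical H-Jensen Shannon divergence (Section 3.5) with the variance
    # loss l(x, a) = ||x - a||^2: inf_a (1/m) sum_i l(x_i, a) is attained at
    # the sample mean and equals the total sample variance.
    b = rng.integers(0, 2, size=len(x))       # i.i.d. mixing bits b_i
    mix = np.where(b[:, None] == 1, x, y)     # m draws from (p + q) / 2
    h = lambda s: ((s - s.mean(axis=0)) ** 2).sum(axis=1).mean()
    return h(mix) - 0.5 * (h(x) + h(y))

def permutation_test(x, y, divergence, n_perm=100, alpha=0.05, seed=0):
    # Output "p != q" iff the observed divergence lands in the top
    # alpha-quantile among divergences of randomly swapped datasets.
    rng = np.random.default_rng(seed)
    observed = divergence(x, y, rng)
    pooled = np.concatenate([x, y])
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        xs, ys = pooled[idx[:len(x)]], pooled[idx[len(x):]]
        exceed += divergence(xs, ys, rng) >= observed
    return (exceed + 1) / (n_perm + 1) < alpha   # True means "reject p = q"
```

When the two sample sets are identical the mixture equals either set and the estimate is exactly zero; under a clear mean shift the observed divergence dominates the permuted ones and the test rejects.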
When the chosen $D_{\ell}^{\phi}$ is not a strict divergence (see Proposition 2), we may falsely conclude $p = q$ (i.e. $D(p\|q) = 0$ ) when in reality $p \neq q$ . This is true but inconsequential in finite data scenarios. With finite data, it is generally impossible to guarantee the test power (i.e. to bound the probability of concluding $p = q$ when in reality $p \neq q$ for arbitrary $p, q$ ), and the prior literature does not provide such guarantees. Hence our guarantee is no weaker than that of the prior two sample test literature.

# 4.2 EXPERIMENT SETUP

Baselines We compare our proposed approach with six other divergences. All methods are based on the permutation test explained in Section 4.1. MMD-D (Liu et al., 2020) measures the MMD distance with a deep kernel, while MMD-O (Gretton et al., 2012) measures the MMD distance with a Gaussian kernel. Mean embedding (ME) and smoothed characteristic functions (SCF) (Chwialkowski et al., 2015; Jitkrittum et al., 2016) are distances based on the difference in Gaussian kernel mean embeddings at a set of optimized points, or a set of optimized frequencies. C2ST-S and C2ST-L (Lopez-Paz & Oquab, 2017; Cheng & Cloninger, 2019) use a classifier's accuracy in distinguishing between the two distributions.

Comparison Metrics and Setup All methods have the same significance level (which is provably equal to $\alpha = 0.05$ because of the permutation test), therefore we only consider the test power. We follow Liu et al. (2020) and consider four datasets: Blob (Liu et al., 2020), HDGM (Liu et al., 2020), HIGGS (Adam-Bourdarios et al., 2014) and MNIST (LeCun & Cortes, 2010). Our method and all the baseline methods have hyper-parameters. To ensure a fair comparison, we follow the same evaluation setup as (Liu et al., 2020) for all methods. We split each dataset into two equal partitions: a training set to tune hyper-parameters, and a validation set to compute the final test output.
Implementation Details We choose $\phi(\theta, \lambda) = \left(\frac{\theta^s + \lambda^s}{2}\right)^{1/s}$ for $s \geq 1$ (which includes the H-Jensen Shannon divergence when $s = 1$ and the H-Min divergence when $s = \infty$ ). We define $\ell(x, a)$ as the negative log likelihood of $x$ under distribution $a$ , where $a$ belongs to a certain model family $\mathcal{A}$ . We experiment with mixtures of Gaussians, Parzen density estimators and variational autoencoders (Kingma & Welling, 2013). Our hyper-parameters are the parameter $s$ and the generative model family. Choosing these hyper-parameters might seem cumbersome, but compared to the second best baseline (MMD-D, which tunes thousands of deep kernel parameters), we have far fewer hyper-parameters.

We use $\alpha = 0.05$ in all two-sample test experiments. Each permutation test uses 100 permutations, and we run each test 100 times to compute the test power (i.e. the percentage of runs in which it correctly outputs $p \neq q$ ). Finally, we plot and report the standard deviation of the performance by repeating the entire experiment 10 times.

# 4.3 EXPERIMENT RESULTS

The average test powers are reported in Figure 4, Figure 2, Table 1 and Table 3. Our approach achieves superior test power across the board. Notably, on HIGGS we achieve the same test power with $2\mathrm{x}$ fewer samples than the second best test, and on MNIST we achieve perfect test power even at the smallest sample size evaluated in (Liu et al., 2020).

Following (Liu et al., 2020) we also evaluate the test power as the dimension of the problem increases (Figure 2). Our test power decreases gracefully as the dimension of the problem increases. We hypothesize that the test power improvements come from leveraging progress in generative model research: for each type of data (e.g.
bio, image, text) there have been decades of research on suitable generative models; we use commonly used generative models (in the modern literature) for each data type (e.g. KDE for low dimensional physics/bio data, VAE for simple images).

![](images/7975a2d132fb1b93a342b77effd0e8d7c9fe49e366d8e4c8305f62afcaa11923.jpg)
Figure 2: Average test power on the HDGM dataset. Left: results with the same sample size (4000) and different data dimensions. Right: results with the same sample dimension (10) and different sample sizes. Our method (H-Div, dashed line) achieves better test power in almost every setup. All tests have high test power for low data dimensions, but our method scales better to higher data dimensions.

![](images/0d63de44c2b38523484bd5499c910d3ed2c32f4c2648591d1a6292e6dcb7f6f9.jpg)
| $N$ | ME | SCF | C2ST-S | C2ST-L | MMD-O | MMD-D | H-Div |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1000 | 0.120±0.007 | 0.095±0.007 | 0.082±0.015 | 0.097±0.014 | 0.132±0.005 | 0.113±0.013 | 0.240±0.020 |
| 2000 | 0.165±0.019 | 0.130±0.019 | 0.183±0.026 | 0.232±0.032 | 0.291±0.017 | 0.304±0.012 | 0.380±0.040 |
| 3000 | 0.197±0.012 | 0.142±0.025 | 0.257±0.049 | 0.399±0.058 | 0.376±0.022 | 0.403±0.050 | 0.685±0.015 |
| 5000 | 0.410±0.041 | 0.261±0.044 | 0.592±0.037 | 0.447±0.045 | 0.659±0.018 | 0.699±0.047 | 0.930±0.010 |
| 8000 | 0.691±0.067 | 0.467±0.038 | 0.892±0.029 | 0.878±0.020 | 0.923±0.013 | 0.952±0.024 | 1.000±0.000 |
| 10000 | 0.786±0.041 | 0.603±0.066 | 0.974±0.007 | 0.985±0.005 | 1.000±0.000 | 1.000±0.000 | 1.000±0.000 |
| Avg. | 0.395 | 0.283 | 0.497 | 0.506 | 0.564 | 0.579 | 0.847 |
Table 1: Average test power $\pm$ standard error for $N$ samples on the HIGGS dataset. The results on MNIST are similar and are presented in Table 3, Appendix B.1.

![](images/89700e17b6a42e4879a794a549bfd0fd56c453cc6a45e5cf747390ca2333c422.jpg)
Figure 3: Example plots of the H-divergence across different geographical locations for losses $\ell$ related to agriculture (left) and energy production (right). Darker color indicates larger H-divergence. Compared to divergences such as KL, the H-divergence measures changes relevant to different social and economic activities (by selecting appropriate loss functions $\ell$ ). For example, even though climate change significantly impacts high latitude or high altitude areas, this change has less relevance to agriculture (because few agricultural activities are possible in these areas).

![](images/97c0d7f9515e5ea34c44c1a3438c557ac6847f5550c426e7ea14676c90954739.jpg)

# 5 EXPERIMENT: DECISION DEPENDENT DISCREPANCY MEASUREMENT

# 5.1 ASSESSING CLIMATE CHANGE

As an illustrative example of how the H-divergence can facilitate decision making, we use climate data and study how climate change affects decision making through the lens of the H-divergence. Scientists and policy makers are often interested in how climate change disparately affects different geographical locations. Existing methods (Preston et al., 2011) focus on one aspect of climate change (such as the expected economic loss (Burke et al., 2018)) using tailor-made analyses, while the H-divergence provides a general tool for hypothesis testing and visualization of different aspects of climate change. In our example, we choose suitable loss functions to quantitatively measure aspects of climate change that are relevant to decision making in agriculture and renewable energy production.[2]

Setup We use the NOAA database, which contains daily weather from thousands of weather stations at different geographical locations.
For each location, we summarize the weather sequence of each year into a few summary statistics (average yearly temperature, humidity, wind speed and rainy days). We are interested in assessing changes in weather over this period (1981-2019) at each location, from the perspective of agriculture and renewable energy activities. Further details of these experiments are in Appendix C.2.

Example: Agriculture It is known that climate change affects crop suitability (Lobell et al., 2008). Let $\mathcal{A}$ denote the set of possible crops to plant at each location (e.g. wheat/barley/rice), and $\ell(x, a)$ denote the loss of planting crop $a$ if the yearly weather is $x$ . We estimate the function $\ell$ by matching geographical locations in the FAO crop yield dataset (FAOSTAT et al., 2006) to weather stations in the NOAA database, and learn a function to predict crop yield from weather data with kernel ridge regression.

The H-divergence has a natural interpretation: a geographical location could either (1) plant for the entire period 1981-2019 the single crop that is optimal for the local climate (i.e. choose $a^* = \arg \min_{a\in \mathcal{A}}\mathbb{E}_{(p + q) / 2}[\ell (X,a)]$ ); or (2) plant the optimal crops for 1981-1999 and for 2000-2019 respectively. The H-divergence measures the additional loss of option (1) compared to option (2). In other words, it is the excess loss of not adapting the crop type to climate change. For each geographical location we can compute the H-divergence $D_{\ell}^{\mathrm{JS}}$ for the estimated $\ell$ (plotted in Figure 3, left).

Example: Energy production Changes in weather also affect electricity generation, since climate change could affect the amount of wind/solar energy available. Let $\mathcal{A}$ denote the number of wind/solar/fossil fuel power plants built, and $\ell(x, a)$ denote the loss (negative utility) when the weather is $x$ . We obtain the function $\ell$ using empirical formulas for energy production (Npower, 2012).
The H-divergence for this loss function is shown in Figure 3 (right). Intuitively, the H-divergence measures the excess loss of using the same energy generation infrastructure for the entire time period versus using different infrastructure that adapts to climate change. While this is only an illustrative example, comparing the two maps shows that regions and industries are affected by climate change in different ways; the H-divergence provides a quantitative framework for this kind of assessment.

| Loss Selection | Selected Features |
| --- | --- |
| Neutral | education, cap-gain, sex, age, occupation |
| Upweight low income | education, cap-gain, relationship, marital-status, sex |
| Upweight high income | education, cap-gain, sex, age, race |

Table 2: Features selected by different approaches. With the H-divergence we can select the different features that are important in different decision problems. For example, if we assign a high or low penalty to incorrect predictions for higher income groups, we select a different set of features.

# 5.2 FEATURE SELECTION

In a feature selection task, we wish to know which input features are most predictive of the label. Feature selection provides information on which features have the biggest influence on the label, and can be used in scientific discovery (Jović et al., 2015; Zhang et al., 2015).

Off-the-shelf feature selection algorithms often do not take into account problem-specific requirements. For example, denoting the input features as $X_{1}, \dots, X_{K}$ and the label as $Y$ , mutual information feature selection algorithms estimate the Shannon mutual information $I(X_{i}, Y) \coloneqq \mathrm{KL}(p(X_{i}, Y) \| p(X_{i}) p(Y))$ and select the features with the largest mutual information. However, scientists and policy makers often need fine-grained control to answer their specific scientific or policy questions. For example, social scientists might want to know which features are more important for high-income as compared to low-income groups (e.g. to understand potential glass ceilings).

With the H-divergence we can select features with large $D_{\ell}^{\phi}(p(X_i,Y)\| p(X_i)p(Y))$ (i.e. features for which the optimal action differs under the joint $p(X_i,Y)$ and the product of marginals $p(X_i)p(Y)$ ).
By choosing different loss functions $\ell$ we obtain different feature selection results, each reflecting the features that are important for that decision problem. For example, in Table 2 we show the features selected for the UCI income prediction dataset (Blake, 1998). For this dataset, we choose $\mathcal{A}$ as the set of logistic regression functions, $\ell(x,a)$ as the cross entropy loss of regression function $a$ on the sample $x$ , and $\phi(\theta,\lambda) = \max(\theta,\lambda)$ . If we want to focus on high income groups, we can assign a higher weight to the loss on high income samples, and vice versa. We observe that gender/race is more predictive of income for high-income groups, while relationship or marital status is more predictive of income for lower-income groups. This can help us identify potential inequality or suggest further investigation into the causes of low income and poverty. For example, our results suggest a connection between family and relationship status and poverty, and a connection between gender/race and high income. These connections merit further investigation into their causes and possible policy remedies.

# 6 ACKNOWLEDGEMENTS

SE acknowledges support by NSF (#1651565, #1522054, #1733686), ONR (N000141912145), AFOSR (FA95501910024), ARO (W911NF-21-1-0125) and a Sloan Fellowship.

# REFERENCES

Claire Adam-Bourdarios, Glen Cowan, Cécile Germain, Isabelle Guyon, Balázs Kégl, and David Rousseau. The Higgs boson machine learning challenge. In Proceedings of the 2014 International Conference on High-Energy Physics and Machine Learning - Volume 42, HEPML'14, pp. 19-55. JMLR.org, 2014.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pp. 214-223. PMLR, 2017.
Peter Bartlett. Theoretical statistics, lecture 13. https://www.stat.berkeley.edu/~bartlett/courses/2013spring-stat210b/notes/13notes.pdf, 2013.

Catherine Blake.
UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998. +Jacob Burbea and C Radhakrishna Rao. Entropy differential metric, distance and divergence measures in probability spaces: A unified approach. Journal of Multivariate Analysis, 12(4):575-596, 1982. +Jacob Burbea and C Radhakrishna Rao. Differential metrics in probability spaces. Probab. Math. Stat, 3:241-258, 1984. +Marshall Burke, W Matthew Davis, and Noah S Diffenbaugh. Large potential reduction in economic damages under UN mitigation targets. Nature, 557(7706):549-553, 2018. +X. Cheng and A. Cloninger. Classification logit two-sample testing by neural networks. ArXiv, abs/1909.11298, 2019. +Kacper P Chwialkowski, Aaditya Ramdas, Dino Sejdinovic, and Arthur Gretton. Fast two-sample testing with analytic representations of probability measures. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems 28, pp. 1981-1989. Curran Associates, Inc., 2015. +Imre Csiszár. Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten. Magyar Tud. Akad. Mat. Kutato Int. Koezl., 8:85-108, 1964. +Morris H DeGroot. Optimal statistical decisions, volume 82. John Wiley & Sons, 2005. +Morris H DeGroot et al. Uncertainty, information, and sequential experiments. The Annals of Mathematical Statistics, 33(2):404-419, 1962. +Gary Doran, Krikamol Muandet, Kun Zhang, and Bernhard Scholkopf. A permutation-based kernel conditional independence test. In UAI, pp. 132-141, 2014. +John Duchi, Khashayar Khosravi, Feng Ruan, et al. Multiclass classification, information, divergence and surrogate risk. Annals of Statistics, 46(6B):3246-3275, 2018. +Michael D Ernst et al. Permutation methods: a basis for exact inference. Statistical Science, 19(4):676-685, 2004. +Peyman Mohajerin Esfahani and Daniel Kuhn.
Data-driven distributionally robust optimization using the Wasserstein metric: Performance guarantees and tractable reformulations. Mathematical Programming, 171(1-2):115-166, 2018. +FAO FAOSTAT et al. FAO statistical databases. Rome: Food and Agriculture Organization of the United Nations, 2006. +Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Scholkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723-773, 2012. +Peter D Grünwald, A Philip Dawid, et al. Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory. The Annals of Statistics, 32(4):1367-1433, 2004. +Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261, 2019. +Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626-6637, 2017. +Wittawat Jitkrittum, Zoltán Szabó, Kacper P Chwialkowski, and Arthur Gretton. Interpretable distribution features with maximum testing power. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 181-189. Curran Associates, Inc., 2016. + +Alan Jović, Karla Brkić, and Nikola Bogunović. A review of feature selection methods with applications. In 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 1200-1205. IEEE, 2015. +Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114v10, December 2013. +Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. URL http://yann.lecun.com/exdb/mnist/. +Feng Liu, Wenkai Xu, Jie Lu, Guangquan Zhang, A. Gretton, and D. Sutherland.
Learning deep kernels for non-parametric two-sample tests. ArXiv, abs/2002.09116, 2020. +David B Lobell, Marshall B Burke, Claudia Tebaldi, Michael D Mastrandrea, Walter P Falcon, and Rosamond L Naylor. Prioritizing climate change adaptation needs for food security in 2030. Science, 319(5863):607-610, 2008. +David Lopez-Paz and M. Oquab. Revisiting classifier two-sample tests. arXiv: Machine Learning, 2017. +Tengyu Ma. Machine learning theory. https://github.com/tengyuma/cs229m_notes/blob/main/Winter2021/pdf/02-08-2021.pdf, 2021. +Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, pp. 429-443, 1997. +Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pp. 271-279, 2016. +Npower. Wind turbine power calculations. Mechanical and Electrical Engineering Power Industry, The Royal Academy of Engineering, 2012. +Benjamin L Preston, Emma J Yuen, and Richard M Westaway. Putting vulnerability to climate change on the map: a review of approaches, benefits, and risks. Sustainability Science, 6(2):177-202, 2011. +C Radhakrishna Rao. Diversity and dissimilarity coefficients: a unified approach. Theoretical Population Biology, 21(1):24-43, 1982. +Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge University Press, 2014. +Jiaming Song and Stefano Ermon. Understanding the limitations of variational mutual information estimators. arXiv preprint arXiv:1910.06222, 2019. +Bharath K Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Scholkopf, and Gert RG Lanckriet. On integral probability metrics, $\phi$-divergences and binary classification. arXiv preprint arXiv:0901.2698, 2009. +Yilun Xu, Shengjia Zhao, Jiaming Song, Russell Stewart, and Stefano Ermon.
A theory of usable information under computational constraints. arXiv preprint arXiv:2002.10689, 2020. +Yudong Zhang, Zhengchao Dong, Preetha Phillips, Shuihua Wang, Genlin Ji, Jiquan Yang, and Ti-Fei Yuan. Detection of subjects and brain regions related to Alzheimer's disease using 3D MRI scans based on eigenbrain and machine learning. Frontiers in Computational Neuroscience, 9:66, 2015. + +# A PROOFS + +Lemma 2. For any choice of $\ell$ and for any choice of $\phi$ that satisfy Definition 2, $D_{\ell}^{\phi}$ is non-negative and $D_{\ell}^{\phi}(p,q) = 0$ whenever $p = q$. Furthermore, $D_{\ell}^{\phi}$ is symmetric whenever $\phi$ is symmetric. + +Proof of Lemma 2. For any choice of $p, q$, by the concavity of the H-entropy in Lemma 1 we have + +$$
H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(p) \geq \frac{1}{2}\left(H_{\ell}(q) - H_{\ell}(p)\right)
$$ + +$$
H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(q) \geq \frac{1}{2}\left(H_{\ell}(p) - H_{\ell}(q)\right)
$$ + +Therefore, summing the two inequalities gives + +$$
\left(H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(p)\right) + \left(H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(q)\right) \geq 0
$$ + +By the requirement on $\phi$ we know that $D_{\ell}^{\phi}(p\| q)\geq 0$. In addition, when $p = q$, since $(p + q) / 2 = p = q$ we have $D_{\ell}^{\phi}(p\| q) = \phi (0,0) = 0$. + +To show symmetry, note that + +$$
\begin{array}{l} D_{\ell}^{\phi}(p \| q) = \phi\left(H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(p), H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(q)\right) = \phi\left(H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(q), H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(p)\right) \\ = D_{\ell}^{\phi}(q \| p) \\ \end{array}
$$ + +whenever $\phi$ is symmetric. $\square$ + +Proposition 3.
$D_{\ell}^{\phi}(p\| q) > 0$ if and only if $\arg \inf_{a}\mathbb{E}_{p}[\ell (X,a)]\cap \arg \inf_{a}\mathbb{E}_{q}[\ell (X,a)] = \emptyset$. + +Proof of Proposition 3. Denote $\mathcal{A}_p^* = \arg \inf_a\mathbb{E}_p[\ell (X,a)]$ and $\mathcal{A}_q^* = \arg \inf_a\mathbb{E}_q[\ell (X,a)]$, and compute + +$$
H_{\ell}\left(\frac{p + q}{2}\right) = \inf_{a} \mathbb{E}_{\frac{p + q}{2}}[\ell(X, a)] = \inf_{a}\left(\frac{1}{2}\mathbb{E}_{p}[\ell(X, a)] + \frac{1}{2}\mathbb{E}_{q}[\ell(X, a)]\right) \tag{3}
$$ + +If $\mathcal{A}_p^* \cap \mathcal{A}_q^* = \emptyset$, then for any action $a'$ such that $\mathbb{E}_p[\ell(X, a')] = H_\ell(p)$ we must have $a' \in \mathcal{A}_p^*$, so $a' \notin \mathcal{A}_q^*$ and $\mathbb{E}_q[\ell(X, a')] > H_\ell(q)$. Similarly, if we choose $a''$ such that $\mathbb{E}_q[\ell(X, a'')] = H_\ell(q)$, we have $\mathbb{E}_p[\ell(X, a'')] > H_\ell(p)$. In other words, for any choice of action $a \in \mathcal{A}$, either $a \notin \mathcal{A}_p^*$ and $\mathbb{E}_p[\ell(X, a)] > H_\ell(p)$, or $a \in \mathcal{A}_p^*$ and $\mathbb{E}_q[\ell(X, a)] > H_\ell(q)$. Therefore + +$$
\inf_{a}\left(\frac{1}{2}\mathbb{E}_{p}[\ell(X, a)] + \frac{1}{2}\mathbb{E}_{q}[\ell(X, a)]\right) > \frac{1}{2} H_{\ell}(p) + \frac{1}{2} H_{\ell}(q) \tag{4}
$$ + +Combining Eq.(3) and Eq.(4) we have + +$$
\frac{1}{2}\left(H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(p)\right) + \frac{1}{2}\left(H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(q)\right) > 0
$$ + +By Definition 2 this implies (for any choice of $\phi$ that satisfies the requirements in Definition 2) that $D_{\ell}^{\phi}(p\| q) > 0$. + +To prove the converse, simply observe that if $\mathcal{A}_p^*\cap \mathcal{A}_q^*\neq \emptyset$, then letting $a^* \in \mathcal{A}_p^*\cap \mathcal{A}_q^*$ we have $a^{*} \in \arg \inf_{a\in \mathcal{A}}\mathbb{E}_{\frac{p + q}{2}}[\ell(X,a)]$.
This implies that + +$$
2 H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(q) - H_{\ell}(p) = 2\mathbb{E}_{\frac{p + q}{2}}\left[\ell\left(X, a^{*}\right)\right] - \mathbb{E}_{q}\left[\ell\left(X, a^{*}\right)\right] - \mathbb{E}_{p}\left[\ell\left(X, a^{*}\right)\right] = 0
$$ + +By Definition 2 we can conclude that $D_{\ell}^{\phi}(p\| q) = 0$. $\square$ + +Theorem 1. The set of $H$-Jensen Shannon divergences is strictly larger than the set of $\mathrm{MMD}^2$ distances. + +Proof of Theorem 1. Let $k(x,y)$ be a kernel on an input space $\mathcal{X}$, and let $\mathcal{H}$ be the RKHS induced by the kernel. The (squared) MMD distance is defined by + +$$
\mathrm{MMD}^{2}(p, q) = \mathbb{E}_{X \sim p, Y \sim p} k(X, Y) + \mathbb{E}_{X \sim q, Y \sim q} k(X, Y) - 2\mathbb{E}_{X \sim p, Y \sim q} k(X, Y)
$$ + +which we write more compactly as $\mathrm{MMD}^2 (p,q) = \mathbb{E}_{p,p}k(X,Y) + \mathbb{E}_{q,q}k(X,Y) - 2\mathbb{E}_{p,q}k(X,Y)$. + +Define $\phi (x,y) = \| k(x,\cdot) - k(y,\cdot)\|_{\mathcal{H}}^2$.
We can rewrite this in the following form: + +$$
\begin{array}{l} \operatorname{MMD}^{2}(p, q) = \mathbb{E}_{p, q}\phi(X, Y) - \frac{1}{2}\mathbb{E}_{p, p}\phi(X, Y) - \frac{1}{2}\mathbb{E}_{q, q}\phi(X, Y) \tag{5} \\ = \mathbb{E}_{p, q}\left[\|k(X, \cdot)\|_{\mathcal{H}}^{2} + \|k(Y, \cdot)\|_{\mathcal{H}}^{2} - 2k(X, Y)\right] - \frac{1}{2}\mathbb{E}_{p, p}\left[\|k(X, \cdot)\|_{\mathcal{H}}^{2} + \|k(Y, \cdot)\|_{\mathcal{H}}^{2} - 2k(X, Y)\right] \\ \quad - \frac{1}{2}\mathbb{E}_{q, q}\left[\|k(X, \cdot)\|_{\mathcal{H}}^{2} + \|k(Y, \cdot)\|_{\mathcal{H}}^{2} - 2k(X, Y)\right] = \mathbb{E}_{p, p}k(X, Y) + \mathbb{E}_{q, q}k(X, Y) - 2\mathbb{E}_{p, q}k(X, Y) \\ \end{array}
$$ + +We also observe an algebraic relationship for any function $f(x,y)$ such that $f(x,y) = f(y,x)$ for all $x,y$: + +$$
\begin{array}{l} \mathbb{E}_{\frac{p + q}{2}, \frac{p + q}{2}} f(X, Y) = \frac{1}{4}\mathbb{E}_{p, p} f(X, Y) + \frac{1}{4}\mathbb{E}_{p, q} f(X, Y) + \frac{1}{4}\mathbb{E}_{q, p} f(X, Y) + \frac{1}{4}\mathbb{E}_{q, q} f(X, Y) \\ = \frac{1}{4}\mathbb{E}_{p, p} f(X, Y) + \frac{1}{4}\mathbb{E}_{p, q} f(X, Y) + \frac{1}{4}\mathbb{E}_{q, p} f(Y, X) + \frac{1}{4}\mathbb{E}_{q, q} f(X, Y) \\ = \frac{1}{4}\mathbb{E}_{p, p} f(X, Y) + \frac{1}{4}\mathbb{E}_{q, q} f(X, Y) + \frac{1}{2}\mathbb{E}_{p, q} f(X, Y) \tag{6} \\ \end{array}
$$ + +Furthermore, we have that + +$$
\mathbb{E}_{p, p}\|k(X, \cdot) - k(Y, \cdot)\|_{\mathcal{H}}^{2} = 2\mathbb{E}_{p}\|k(X, \cdot) - \mathbb{E}_{p} k(Y, \cdot)\|_{\mathcal{H}}^{2} \tag{7}
$$ + +Based on the above, noting that $\phi (x,y) = \phi (y,x)$, we can derive + +$$
\begin{array}{ll} \operatorname{MMD}^{2}(p, q) = \mathbb{E}_{p, q}\|k(X, \cdot) - k(Y, \cdot)\|_{\mathcal{H}}^{2} - \frac{1}{2}\mathbb{E}_{p, p}\|k(X, \cdot) - k(Y, \cdot)\|_{\mathcal{H}}^{2} - \frac{1}{2}\mathbb{E}_{q, q}\|k(X, \cdot) - k(Y, \cdot)\|_{\mathcal{H}}^{2} & \text{(Eq. (5))} \\ = 2\mathbb{E}_{\frac{p + q}{2}, \frac{p + q}{2}}\|k(X, \cdot) - k(Y, \cdot)\|_{\mathcal{H}}^{2} - \mathbb{E}_{p, p}\|k(X, \cdot) - k(Y, \cdot)\|_{\mathcal{H}}^{2} - \mathbb{E}_{q, q}\|k(X, \cdot) - k(Y, \cdot)\|_{\mathcal{H}}^{2} & \text{(Eq. (6))} \\ = 4\mathbb{E}_{\frac{p + q}{2}}\|k(X, \cdot) - \mathbb{E}_{\frac{p + q}{2}} k(Y, \cdot)\|_{\mathcal{H}}^{2} - 2\mathbb{E}_{p}\|k(X, \cdot) - \mathbb{E}_{p} k(Y, \cdot)\|_{\mathcal{H}}^{2} - 2\mathbb{E}_{q}\|k(X, \cdot) - \mathbb{E}_{q} k(Y, \cdot)\|_{\mathcal{H}}^{2} & \text{(Eq. (7))} \\ = 4\inf_{a \in \mathcal{H}}\mathbb{E}_{\frac{p + q}{2}}\|k(X, \cdot) - a\|_{\mathcal{H}}^{2} - 2\inf_{a \in \mathcal{H}}\mathbb{E}_{p}\|k(X, \cdot) - a\|_{\mathcal{H}}^{2} - 2\inf_{a \in \mathcal{H}}\mathbb{E}_{q}\|k(X, \cdot) - a\|_{\mathcal{H}}^{2} & \text{(mean def.)} \\ \end{array}
$$ + +Therefore we can define a loss $\ell :\mathcal{X}\times \mathcal{H}\to \mathbb{R}$ where + +$$
\ell(x, a) = 4\|k(x, \cdot) - a\|_{\mathcal{H}}^{2}
$$ + +Under this notation we have + +$$
\begin{array}{l} \mathrm{MMD}^{2}(p, q) = \inf_{a \in \mathcal{H}}\mathbb{E}_{\frac{p + q}{2}}\ell(X, a) - \frac{1}{2}\left(\inf_{a \in \mathcal{H}}\mathbb{E}_{p}\ell(X, a) + \inf_{a \in \mathcal{H}}\mathbb{E}_{q}\ell(X, a)\right) \\ = H_{\ell}\left(\frac{p + q}{2}\right) - \frac{1}{2}\left(H_{\ell}(p) + H_{\ell}(q)\right) = D_{\ell}^{\mathrm{JS}}(p \| q) \\ \end{array}
$$ + +Conversely, we want to show that not every H-Jensen Shannon divergence is an MMD. For example, take $H_{\ell}$ to be the Shannon entropy; then the corresponding $D_{\ell}^{\mathrm{JS}}$ is the Jensen-Shannon divergence, which is not an MMD.
This is because the JS divergence is a type of $f$-divergence, and the only $f$-divergence that is also an IPM is the total variation distance (Sriperumbudur et al., 2009). Therefore, the set of H-Jensen Shannon divergences is strictly larger than the set of MMDs. $\square$ + +Theorem 2. If $\ell$ is $C$-bounded, and $\phi$ is 1-Lipschitz under the $\infty$-norm, then for any choice of distributions $p, q \in \mathcal{P}(\mathcal{X})$ and $t > 0$ we have + +1. $\operatorname*{Pr}[\hat{D}_{\ell}^{\phi}(\hat{p}_m\| \hat{q}_m)\geq t]\leq 4e^{-\frac{t^2m}{2C^2}}$ if $p = q$ +2. $\operatorname*{Pr}\left[\left|\hat{D}_{\ell}^{\phi}(\hat{p}_m\| \hat{q}_m) - D_{\ell}^{\phi}(p\| q)\right|\geq 4\max (\mathcal{R}_m^p (\ell),\mathcal{R}_m^q (\ell)) + t\right]\leq 4e^{-\frac{t^2m}{2C^2}}$ + +Proof of Theorem 2. Let $\hat{p}_m$ be a sequence of $m$ samples $(x_1, \dots, x_m)$ drawn from $p$, and $\hat{q}_m$ a sequence of $m$ samples $(x_1', \dots, x_m')$ drawn from $q$. Let $\hat{r}_m$ be the sub-sampled mixture $(x_1'', \dots, x_m'')$ defined in Section 3.5 (i.e. $x_i'' = x_i b_i + x_i'(1 - b_i)$ where $b_i$ is uniformly sampled from $\{0, 1\}$). We also overload the notation $H_\ell$ by defining $H_\ell(\hat{p}_m) = \inf_{a \in \mathcal{A}} \frac{1}{m} \sum_{i=1}^{m} \ell(x_i, a)$, and define $H_\ell(\hat{q}_m)$, $H_\ell(\hat{r}_m)$ similarly. + +Before proving this theorem we need the following lemmas. + +Lemma 3. Under the assumptions of Theorem 2, + +$$
\operatorname*{Pr}\left[H_{\ell}(\hat{p}_{m}) - \mathbb{E}[H_{\ell}(\hat{p}_{m})] \geq t\right] \leq e^{-\frac{2t^{2}m}{C^{2}}}
$$ + +$$
\operatorname*{Pr}\left[H_{\ell}(\hat{p}_{m}) - \mathbb{E}[H_{\ell}(\hat{p}_{m})] \leq -t\right] \leq e^{-\frac{2t^{2}m}{C^{2}}}
$$ + +Lemma 4.
Under the assumptions of Theorem 2, + +$$
\operatorname*{Pr}\left[|H_{\ell}(p) - H_{\ell}(\hat{p}_{m})| \geq 2\mathcal{R}_{m}(\ell) + t\right] \leq e^{-\frac{2t^{2}m}{C^{2}}}
$$ + +To prove the first statement of the theorem, when $p = q$ we can denote $\mu = \mathbb{E}[H_{\ell}(\hat{p}_m)] = \mathbb{E}[H_{\ell}(\hat{q}_m)] = \mathbb{E}[H_{\ell}(\hat{r}_m)]$, and we have + +$$
\begin{array}{l} \Pr\left[\hat{D}_{\ell}^{\phi}(\hat{p}_{m}\| \hat{q}_{m}) \geq t\right] \\ = \Pr\left[\phi\left(H_{\ell}\left(\hat{r}_{m}\right) - H_{\ell}\left(\hat{p}_{m}\right), H_{\ell}\left(\hat{r}_{m}\right) - H_{\ell}\left(\hat{q}_{m}\right)\right) \geq t\right] \\ \leq \Pr\left[\max\left(H_{\ell}\left(\hat{r}_{m}\right) - H_{\ell}\left(\hat{p}_{m}\right), H_{\ell}\left(\hat{r}_{m}\right) - H_{\ell}\left(\hat{q}_{m}\right)\right) \geq t\right] \\ \leq \Pr\left[H_{\ell}\left(\hat{r}_{m}\right) - H_{\ell}\left(\hat{p}_{m}\right) \geq t\right] + \Pr\left[H_{\ell}\left(\hat{r}_{m}\right) - H_{\ell}\left(\hat{q}_{m}\right) \geq t\right] \\ \leq \Pr\left[H_{\ell}\left(\hat{p}_{m}\right) - \mu \leq -t/2\right] + 2\Pr\left[H_{\ell}\left(\hat{r}_{m}\right) - \mu \geq t/2\right] + \Pr\left[H_{\ell}\left(\hat{q}_{m}\right) - \mu \leq -t/2\right] \\ \leq 4e^{-\frac{t^{2}m}{2C^{2}}} \\ \end{array}
$$ + +The steps follow, in order, from Definition 2, the 1-Lipschitz property of $\phi$, the union bound (applied twice), and Lemma 3. The third inequality holds because if $H_{\ell}(\hat{r}_m) - H_{\ell}(\hat{p}_m) \geq t$ then either $H_{\ell}(\hat{p}_m) - \mu \leq -t/2$ or $H_{\ell}(\hat{r}_m) - \mu \geq t/2$; similarly, if $H_{\ell}(\hat{r}_m) - H_{\ell}(\hat{q}_m) \geq t$ then either $H_{\ell}(\hat{q}_m) - \mu \leq -t/2$ or $H_{\ell}(\hat{r}_m) - \mu \geq t/2$.
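The first tail bound is easy to sanity-check numerically. The sketch below is our illustration (not code from the paper): it takes $\mathcal{X} = [0,1]$, the $C$-bounded loss $\ell(x,a) = |x-a|$ with $C = 1$, a finite action grid for $\mathcal{A}$, $\phi = \max$, and the sub-sampled mixture $\hat{r}_m$ defined above, and compares the empirical tail frequency of $\hat{D}_\ell^\phi(\hat{p}_m \| \hat{q}_m)$ under $p = q$ against the bound $4e^{-t^2 m / (2C^2)}$.

```python
import numpy as np

rng = np.random.default_rng(0)
actions = np.linspace(0.0, 1.0, 21)  # finite action set A

def H_hat(x):
    # empirical H-entropy: infimum over the action grid of the mean loss |x - a|
    return np.abs(x[:, None] - actions[None, :]).mean(axis=0).min()

def D_hat(xp, xq):
    # plug-in divergence with phi = max; the sub-sampled mixture r_m picks
    # each coordinate from p or q according to a fair coin b_i
    b = rng.integers(0, 2, size=len(xp)).astype(bool)
    xr = np.where(b, xp, xq)
    Hr = H_hat(xr)
    return max(Hr - H_hat(xp), Hr - H_hat(xq))

m, t, C = 500, 0.1, 1.0
trials = [D_hat(rng.random(m), rng.random(m)) for _ in range(200)]  # p = q = U[0,1]
emp = float(np.mean(np.array(trials) >= t))
bound = 4 * np.exp(-t**2 * m / (2 * C**2))
print(emp, round(bound, 3))
```

With $m = 500$ and $t = 0.1$ the bound evaluates to roughly $0.33$, while the empirical frequency is essentially zero, consistent with (and, as expected for a worst-case bound, far below) the theorem's guarantee.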
+ +To prove the second statement of the theorem, we observe that + +$$
\begin{array}{l} \left|\hat{D}_{\ell}^{\phi}(\hat{p}_{m}\| \hat{q}_{m}) - D_{\ell}^{\phi}(p\| q)\right| \\ = \left|\phi\left(H_{\ell}(\hat{r}_{m}) - H_{\ell}(\hat{p}_{m}), H_{\ell}(\hat{r}_{m}) - H_{\ell}(\hat{q}_{m})\right) - \phi\left(H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(p), H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(q)\right)\right| \\ \leq \max\left(\left|H_{\ell}\left(\hat{r}_{m}\right) - H_{\ell}\left(\hat{p}_{m}\right) - H_{\ell}\left(\frac{p + q}{2}\right) + H_{\ell}(p)\right|, \left|H_{\ell}\left(\hat{r}_{m}\right) - H_{\ell}\left(\hat{q}_{m}\right) - H_{\ell}\left(\frac{p + q}{2}\right) + H_{\ell}(q)\right|\right) \\ \leq \max\left(\left|H_{\ell}\left(\hat{r}_{m}\right) - H_{\ell}\left(\frac{p + q}{2}\right)\right| + \left|H_{\ell}\left(\hat{p}_{m}\right) - H_{\ell}(p)\right|, \left|H_{\ell}\left(\hat{r}_{m}\right) - H_{\ell}\left(\frac{p + q}{2}\right)\right| + \left|H_{\ell}\left(\hat{q}_{m}\right) - H_{\ell}(q)\right|\right) \\ \end{array}
$$ + +The steps use Definition 2, the 1-Lipschitz property of $\phi$, and the triangle inequality, respectively. + +Therefore, the event $|\hat{D}_{\ell}^{\phi}(\hat{p}_m \| \hat{q}_m) - D_{\ell}^{\phi}(p \| q)| \geq 4 \max(\mathcal{R}_m^p(\ell), \mathcal{R}_m^q(\ell)) + t$ happens only if at least one of the following events happens: + +$$
\left|H_{\ell}\left(\hat{r}_{m}\right) - H_{\ell}\left(\frac{p + q}{2}\right)\right| \geq \mathcal{R}_{m}^{p}(\ell) + \mathcal{R}_{m}^{q}(\ell) + t/2 \geq 2\mathcal{R}_{m}^{(p + q)/2}(\ell) + t/2 \quad (\mathcal{R}\text{ convex})
$$ + +$$
\begin{array}{l} \left|H_{\ell}\left(\hat{p}_{m}\right) - H_{\ell}(p)\right| \geq 2\mathcal{R}_{m}(\ell) + t/2 \\ \left|H_{\ell}\left(\hat{q}_{m}\right) - H_{\ell}(q)\right| \geq 2\mathcal{R}_{m}(\ell) + t/2 \\ \end{array}
$$
+Based on Lemma 4, each of these events happens with probability at most $e^{-\frac{t^2m}{2C^2}}$. Therefore we can conclude by the union bound that + +$$
\operatorname*{Pr}[|\hat{D}_{\ell}^{\phi}(\hat{p}_m\| \hat{q}_m) - D_{\ell}^{\phi}(p\| q)| \geq 4\max(\mathcal{R}_{m}^{p}(\ell), \mathcal{R}_{m}^{q}(\ell)) + t] \leq 4e^{-\frac{t^{2}m}{2C^{2}}}
$$ + +Finally, we prove the two lemmas used in the theorem. Lemma 4 is a standard result in the Rademacher complexity literature; for a proof, see e.g. Section 26.1 in (Shalev-Shwartz & Ben-David, 2014). Lemma 3 can also be proved by standard techniques; we provide the proof here. + +Proof of Lemma 3. Consider two sets of samples $x_{1}, \dots, x_{j}, \dots, x_{m}$ and $x_{1}', \dots, x_{j}', \dots, x_{m}'$ where $x_{i} = x_{i}'$ for every index $i = 1, \dots, m$ except index $j$. Then + +$$
\begin{array}{l} \left|\inf_{a}\frac{1}{m}\sum_{i}\ell(x_{i}, a) - \inf_{a}\frac{1}{m}\sum_{i}\ell\left(x_{i}^{\prime}, a\right)\right| \leq \sup_{a}\left|\frac{1}{m}\sum_{i}\ell(x_{i}, a) - \frac{1}{m}\sum_{i}\ell\left(x_{i}^{\prime}, a\right)\right| \\ = \frac{1}{m}\sup_{a}\left|\ell\left(x_{j}, a\right) - \ell\left(x_{j}^{\prime}, a\right)\right| \leq \frac{C}{m} \\ \end{array}
$$ + +We can then conclude by McDiarmid's inequality that + +$$
\operatorname*{Pr}\left[\inf_{a}\frac{1}{m}\sum_{i}\ell(X_{i}, a) - \mathbb{E}\left[\inf_{a}\frac{1}{m}\sum_{i}\ell(X_{i}, a)\right] \geq t\right] \leq e^{-\frac{2t^{2}}{C^{2}/m}} = e^{-\frac{2t^{2}m}{C^{2}}}
$$ + +$\square$ + +Corollary 1.
$\sqrt{\operatorname{Var}[\hat{D}_{\ell}^{\phi}(\hat{p}_m\|\hat{q}_m)]}\leq 4\max (\mathcal{R}_m^p (\ell),\mathcal{R}_m^q (\ell)) + \sqrt{2C^2 / m}$ + +Proof of Corollary 1. For notational convenience denote $B = 4\max (\mathcal{R}_m^p (\ell),\mathcal{R}_m^q (\ell))$. Then + +$$
\begin{array}{l} \operatorname{Var}[\hat{D}_{\ell}^{\phi}(\hat{p}_{m}\| \hat{q}_{m})] \\ \leq \mathbb{E}\left[\left(\hat{D}_{\ell}^{\phi}(\hat{p}_{m}\| \hat{q}_{m}) - D_{\ell}^{\phi}(p\| q)\right)^{2}\right] \\ = \int_{t=0}^{\infty}\Pr\left[\left(\hat{D}_{\ell}^{\phi}(\hat{p}_{m}\| \hat{q}_{m}) - D_{\ell}^{\phi}(p\| q)\right)^{2} \geq t\right] dt \\ = \int_{t=0}^{\infty}\Pr\left[\left|\hat{D}_{\ell}^{\phi}(\hat{p}_{m}\| \hat{q}_{m}) - D_{\ell}^{\phi}(p\| q)\right| \geq \sqrt{t}\right] dt \\ = \int_{s=0}^{\infty}\Pr\left[\left|\hat{D}_{\ell}^{\phi}(\hat{p}_{m}\| \hat{q}_{m}) - D_{\ell}^{\phi}(p\| q)\right| \geq s\right] 2s\, ds \quad (s = \sqrt{t}) \\ \leq \int_{s=0}^{B} 2s\, ds + \int_{s=0}^{\infty}\Pr\left[\left|\hat{D}_{\ell}^{\phi}(\hat{p}_{m}\| \hat{q}_{m}) - D_{\ell}^{\phi}(p\| q)\right| \geq B + s\right] 2(B + s)\, ds \\ \leq B^{2} + \int_{s=0}^{\infty} 2(B + s) e^{-\frac{s^{2}m}{2C^{2}}} ds \\ \leq B^{2} + \int_{t=0}^{\infty} 2\left(B + t\sqrt{\tfrac{2C^{2}}{m}}\right) e^{-t^{2}} \sqrt{\tfrac{2C^{2}}{m}}\, dt \quad \left(t = s\sqrt{\tfrac{m}{2C^{2}}}\right) \\ = B^{2} + 2B\sqrt{\tfrac{2C^{2}}{m}} \int_{t=0}^{\infty} e^{-t^{2}} dt + \tfrac{4C^{2}}{m} \int_{t=0}^{\infty} t e^{-t^{2}} dt \\ = B^{2} + B\sqrt{\tfrac{2\pi C^{2}}{m}} + \tfrac{2C^{2}}{m} \leq \left(B + \sqrt{2C^{2}/m}\right)^{2} \\ \end{array}
$$ + +where the first inequality uses $\operatorname{Var}[X] \leq \mathbb{E}[(X - c)^2]$ for any constant $c$. $\square$ + +Corollary 2. [Consistency] Under the conditions of Theorem 2, if additionally either 1. $\mathcal{A}$ is a finite set, or 2.
$\mathcal{A}$ is a bounded subset of $\mathbb{R}^d$ for some $d\in \mathbb{N}$ and $\ell$ is Lipschitz w.r.t. $a$, then almost surely $\lim_{m\to \infty}\hat{D}_{\ell}^{\phi}(\hat{p}_m\| \hat{q}_m) = D_{\ell}^{\phi}(p\| q)$. + +Proof of Corollary 2. We can deduce the consistency results from Theorem 2 by observing that the expected Rademacher complexity goes to 0 as $m \to \infty$. + +The first statement is a simple consequence of Massart's lemma (see e.g. Eq.(8.44) in (Ma, 2021)). In particular, because $\mathcal{A}$ is finite we have + +$$
\mathcal{R}_{m}^{p}(\ell) \leq \sqrt{2\log |\mathcal{A}| / m} \rightarrow_{m \rightarrow \infty} 0
$$ + +To prove the second statement, first observe that because $\mathcal{A}$ is bounded, there must exist some $r\in \mathbb{R}$ such that $\mathcal{A}\subset B_r\coloneqq \{a : \| a\| _2\leq r\}$. In addition, without loss of generality we can assume that there exists $L\in \mathbb{R}$ such that $\forall x\in \mathcal{X}$ and $a,a^{\prime}\in \mathcal{A}$, + +$$
|\ell(x, a) - \ell(x, a^{\prime})| \leq L \|a - a^{\prime}\|_{2}
$$ + +There is no loss of generality because in finite dimensions all norms are equivalent, so if $\ell$ is Lipschitz under any norm, it is also Lipschitz under the 2-norm. We can then apply the results on Rademacher complexity for smoothly parameterized classes proved in (Bartlett, 2013), and conclude that $\lim_{m\to \infty}\mathcal{R}_m^p (\ell) = 0$. $\square$ + +# B ADDITIONAL EXPERIMENTAL RESULTS + +# B.1 BLOB DATASET + +![](images/62d6bc30272cdc755fb64426bd02a5bbfe23fca84d37e805b4a9c9b07fbb9e0a.jpg) +Figure 4: Left: Average test power on the Blob dataset for different sample sizes and significance level $\alpha = 0.05$. Our method (H-Div, dashed line) has significantly better test power, especially for setups with small sample sizes. Right: The same plot with significance level $\alpha = 0.01$.
+ +![](images/f7c5011fbc9efa11455b7f5eb5144a6fe08758ae59ae664e8a5b308f276e5c05.jpg) + +# B.2 EVALUATING SAMPLE QUALITY + +![](images/9d7e7cee5c68413d3afea218e6ae56c1e150e674ae097aa827a0de46dc98923d.jpg) + +![](images/0b338101c2abbeaa0fa59f818fb99d8ed426808e2aaaac1f3d8439ff6a16d19d.jpg) +Figure 5: The divergence between corrupted and original images, measured by H-divergence vs. FID. For easier comparison we normalize each distance to $[0,1]$ by a linear transformation. For "speckle" and "impulse" corruption, both divergences are monotonically increasing with more corruption. For "snow" corruption H-divergence is monotonic while FID is not. Other types of corruptions are provided in Appendix B.2. + +![](images/ea5a97441268cb4ba4b3a046363be579bfe51c4a705e04253fc1ccbe58ee3b98.jpg) + +![](images/a6b2e1437dbbd36256a0888018ec6ab0baca6af03cfad2dde192d3a78752c6cd.jpg) + +![](images/3003d01c29c2cf0cd194054afa99e48c62b15653be8c931513e9bcd6c14d6d81.jpg) + +![](images/39f457c0b524ef79ac99051041c35b9fcc2d777089145530b3a38af95a8e9132.jpg) + +![](images/f7ebe05c4647e1507efcb61b66188be95e32b4cc7eef32eb40799cb2d6233fe0.jpg) + +![](images/687947e9131139027cd813de2c2932309b816e3010ed51e5572dcd5c2455fad4.jpg) + +![](images/02254bd13eaea6cc565cbe9ba9e552e7ad5e9117ca592fa883597fd9c21a0e26.jpg) + +![](images/c73a816b273becadd3f4664dfc64361dd2af8e39a606091967596dffc34f56eb.jpg) + +![](images/64bb6015c052a2a39726ccb0041ee7f43cf9e2799bc297f45203b4bccb53b84c.jpg) + +![](images/dc45c4792b6f324a2134c55b1c3c7d9a09e61352419558d1ee4d00eb01eb8ae0.jpg) + +![](images/54f6fd809ea60e312433d00740add147756083f0cc8562717a93d99e0755d20c.jpg) + +![](images/57272da83d0f9c7f0af559e28084b76f3bf611191f031bce471efb6f3dbddb1a.jpg) + +![](images/338fc6d9db2229346d338420cb25dbf4ec45357f3d3905fac16650ac6a2c3b87.jpg) + +The gold standard for evaluating image generative models is human judgment, which is expensive.
Several surrogate measurements are commonly used, such as the Fréchet Inception Distance (FID) (Heusel et al., 2017) or the Inception Score. By formulating this evaluation as estimating the discrepancy between the generated and the real images, we can quantify the quality of image samples by computing the corresponding H-divergence. + +We choose $\mathcal{A}$ as the set of Gaussian mixture distributions on the Inception feature space, $\ell(x,a)$ as the negative log-likelihood of $x$ under distribution $a$, and $\phi (\theta ,\lambda) = \max (\theta ,\lambda)$. To evaluate performance, we use the same setup as (Heusel et al., 2017), adding corruptions from (Hendrycks & Dietterich, 2019) to a set of images. Intuitively, more corruption degrades the sample quality, so a good measure of sample quality should assign a lower quality score (i.e. a higher divergence from the clean images). The results are plotted in Figure 5; plots for the remaining perturbations are in Appendix B.2. Both FID and H-divergence generally increase monotonically with the amount of corruption. Our method is slightly better on some perturbations (such as "snow"): where FID fails to be monotonically increasing, H-divergence remains monotonic, aligning better with human intuition.
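A minimal sketch of this scoring scheme, with several simplifications of the paper's setup that we label explicitly: a single Gaussian action class in place of a Gaussian mixture, random synthetic features in place of Inception features, and pooling the two samples rather than sub-sampling the mixture. With Gaussian actions and negative log-likelihood loss, the infimum in $H_\ell$ is attained at the maximum-likelihood mean and covariance.

```python
import numpy as np

def gauss_nll(x, mu, cov):
    # average negative log-likelihood of the rows of x under N(mu, cov)
    d = x.shape[1]
    diff = x - mu
    quad = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (quad.mean() + logdet + d * np.log(2.0 * np.pi))

def H_hat(x):
    # empirical H-entropy for ell(x, a) = -log a(x) over Gaussian actions a:
    # the infimum is attained at the MLE mean and covariance
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
    return gauss_nll(x, mu, cov)

def h_div(xp, xq):
    # H-divergence with phi = max (pooled samples stand in for the mixture)
    xr = np.concatenate([xp, xq])
    return max(H_hat(xr) - H_hat(xp), H_hat(xr) - H_hat(xq))

rng = np.random.default_rng(0)
clean = rng.normal(size=(2000, 4))   # stand-in for feature vectors
scores = [h_div(clean, clean + rng.normal(scale=s, size=clean.shape))
          for s in (0.0, 0.5, 1.0, 2.0)]  # increasing "corruption"
print([round(s, 3) for s in scores])  # scores grow with the corruption level
```

Just as in the experiment above, the divergence from the clean sample increases monotonically as the corruption strength grows, so it can serve as a (negated) quality score.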
+ +![](images/63ec5bf08505724914d73f05a7febf5c62c39cffbbe5a372948966cc384b16b8.jpg) + +![](images/8e887da56435b331c5404c0cbd754a74ba61d56ecd2ed2e11ae8492950fcf382.jpg) + +![](images/b019904c0ccaeb56bb206eead110fc798a38facab120d3040835ecf520a830cd.jpg) + +![](images/0b0f51b15f0f5bf53777ff0b4f91c75ab4b22a22e06715e74e1503da40931475.jpg) + +![](images/e08cc9d00c0c09c9be4b22e87db59a60f2f81bf1c95f6852125b966dc49a12a1.jpg) + +![](images/e5146f6229c19ec8f5e2e6822d418e6469662b28b08ba8a037db6dac50a2607c.jpg) + +![](images/a7b49b66c247008a1de0ea19dca2be5f5d9186e44bd0d5ecd8ddde63027fd729.jpg) + +![](images/bd23cef8794a7ee70d64f2f7f7d75b16d3bcf8081e8b5ed7db240c1439b61471.jpg) + +![](images/c9475734ed98bb9fab5cd066fbbfc5ae1b6f21b324c43b70d1da8c7dd2b0a173.jpg) + +![](images/5301fa5151d045bc2f3f8556cee8063389fb183adb5386b4f48072dc102f487a.jpg) + +![](images/eb6e779ab860a158859989182778369548aebd33c4a864cb5f830a50fe89b026.jpg) + +![](images/aa06839caf867da157a8ffa04385177fe99b4d114e2fb36e6b2ec7c348263eb2.jpg) + +![](images/fecddf0ce4e55f9539dd1163685957e434bf07f5c555190f67df2412125d40cb.jpg) +Figure 6: Additional plots that extend Figure 5. + +![](images/257e2c0edb508c25402a86d3caac76b538854b7bf4ea9d7e72de7a813d336d43.jpg) + +![](images/353cd1ba76932763d7b911f3f1ca966035581893a0bf01dea412ad5f0a4eafca.jpg) + +# C ADDITIONAL THEORY AND EXPERIMENT DETAILS + +# C.1 CONNECTION TO OPTIMAL TRANSPORT + +We first show that H-divergence can also have a transportation interpretation. For all the intuitive interpretations we avoid technical difficulty by assuming $\mathcal{X}$ is a finite set, even though all the formulas are applicable when $\mathcal{X}$ has infinite cardinality. + +
| $N$ | ME | SCF | C2ST-S | C2ST-L | MMD-O | MMD-D | H-Div |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 200 | 0.414±0.050 | 0.107±0.018 | 0.193±0.037 | 0.234±0.031 | 0.188±0.010 | 0.555±0.044 | 1.000±0.000 |
| 400 | 0.921±0.032 | 0.152±0.021 | 0.646±0.039 | 0.706±0.047 | 0.363±0.017 | 0.996±0.004 | 1.000±0.000 |
| 600 | 1.000±0.000 | 0.294±0.008 | 1.000±0.000 | 0.977±0.012 | 0.619±0.021 | 1.000±0.000 | 1.000±0.000 |
| 800 | 1.000±0.000 | 0.317±0.017 | 1.000±0.000 | 1.000±0.000 | 0.797±0.015 | 1.000±0.000 | 1.000±0.000 |
| 1000 | 1.000±0.000 | 0.346±0.019 | 1.000±0.000 | 1.000±0.000 | 0.894±0.016 | 1.000±0.000 | 1.000±0.000 |
| Avg. | 0.867 | 0.243 | 0.768 | 0.783 | 0.572 | 0.910 | 1.000 |
+ +Table 3: Average test power $\pm$ standard error for $N$ samples over the MNIST dataset. + +Setup Choose $\mathcal{A} = \mathcal{X}$, and let $\ell(x, a)$ be a symmetric function ($\ell(x, a) = \ell(a, x)$) that denotes the cost of transporting a unit of goods from $x$ to $a$. When we say that a unit of goods is located according to $p$, we mean that there is 1 unit of goods dispersed over the locations in $\mathcal{X}$, where $p(x)$ is the amount of goods at location $x$. + +Optimal Transport Distance The optimal transport distance is defined by + +$$
O_{\ell}(p, q) = \inf_{r_{XY}, r_{X} = p_{X}, r_{Y} = q_{Y}} \mathbb{E}_{r_{XY}}[\ell(X, Y)]
$$ + +Intuitively, the optimal transport distance measures the following cost: the goods are initially located according to $p$ and we would like to relocate them according to $q$; $O_{\ell}(p, q)$ denotes the minimum cost of accomplishing this transportation task. + +H-Divergence as Optimal Storage We first give an intuitive interpretation of the H-entropy + +$$
H_{\ell}(p) = \inf_{a}\mathbb{E}_{p}[\ell(X, a)] \quad a^{*} = \arg\inf_{a \in \mathcal{X}}\mathbb{E}_{p}[\ell(X, a)]
$$ + +Suppose we want to move goods located according to $p$ to a single storage location (for example, we want to collect all the mail in a city at a package center); then $a^*$ is the optimal site for the storage location, and the H-entropy measures the minimum cost of transporting all goods there. Similarly, $2H_{\ell}\left(\frac{p + q}{2}\right)$ measures the minimum cost of transporting both the goods located according to $p$ and the goods located according to $q$ to the same storage location.
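This warehouse picture is easy to make concrete. The sketch below is our illustration of the setup above: $\mathcal{X}$ is a discretized line, $\mathcal{A} = \mathcal{X}$, and $\ell(x, a) = |x - a|$; $H_\ell$ is computed by brute force over candidate storage sites, and $D_\ell^{\mathrm{JS}}$ is the saving from using two dedicated sites rather than one shared site.

```python
import numpy as np

locations = np.linspace(0.0, 10.0, 101)  # finite X; actions A = X

def H(p):
    # H-entropy: cost of serving distribution p from its best single storage site
    costs = np.abs(locations[:, None] - locations[None, :])  # costs[a, x] = |a - x|
    return (costs * p[None, :]).sum(axis=1).min()

def d_js(p, q):
    # D_JS(p || q) = H((p+q)/2) - (H(p) + H(q)) / 2
    return H((p + q) / 2.0) - 0.5 * (H(p) + H(q))

def bump(center, width=0.5):
    # smoothed pile of goods concentrated around a location
    w = np.exp(-0.5 * ((locations - center) / width) ** 2)
    return w / w.sum()

p, q = bump(2.0), bump(8.0)  # goods clustered near x = 2 and near x = 8
print(round(d_js(p, p), 6), round(d_js(p, q), 3))
```

The divergence is exactly 0 for identical piles and large when the two piles sit far apart, since a single shared storage site then forces long hauls for one of them.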
The H-divergence

$$
2 D _ {\ell} ^ {\mathrm {J S}} (p \| q) := 2 H _ {\ell} \left(\frac {p + q}{2}\right) - \left(H _ {\ell} (q) + H _ {\ell} (p)\right)
$$

measures the reduction in transportation cost from using two storage locations (one for $p$ and one for $q$) rather than a single storage location (for both $p$ and $q$).

The H-divergence is related to the optimal transport distance by the following inequality.

Proposition 4. If $\ell$ satisfies the triangle inequality $\forall x, y, z \in \mathcal{X},\ \ell(x, y) + \ell(y, z) \geq \ell(x, z)$, then $D_{\ell}^{\mathrm{JS}}(p \| q) \leq \frac{1}{2} O_{\ell}(p, q)$.

Proof of Proposition 4. Let $a_q^* = \arg \inf_a \mathbb{E}_q[\ell(X, a)]$; then we have

$$
\begin{array}{l} 2 H _ {\ell} \left(\frac {p + q}{2}\right) = \inf _ {a} \left(\mathbb {E} _ {p} [ \ell (X, a) ] + \mathbb {E} _ {q} [ \ell (X, a) ]\right) \leq \mathbb {E} _ {p} [ \ell (X, a _ {q} ^ {*}) ] + \mathbb {E} _ {q} [ \ell (X, a _ {q} ^ {*}) ] \\ \leq \inf _ {r _ {X Y}, r _ {X} = p _ {X}, r _ {Y} = q _ {Y}} \mathbb {E} _ {r _ {X Y}} \left[ \ell (X, a _ {q} ^ {*}) \right] + \mathbb {E} _ {q} \left[ \ell (X, a _ {q} ^ {*}) \right] \\ \leq \inf _ {r _ {X Y}, r _ {X} = p _ {X}, r _ {Y} = q _ {Y}} \mathbb {E} _ {r _ {X Y}} [ \ell (X, Y) + \ell (Y, a _ {q} ^ {*}) ] + \mathbb {E} _ {q} [ \ell (X, a _ {q} ^ {*}) ] \\ = O _ {\ell} (p, q) + 2 H _ {\ell} (q) \\ \end{array}
$$

Intuitively, to move the goods located according to $p$ and the goods located according to $q$ to a common storage location, one option is to first transport all goods from $p$ to $q$ (so that the amount of goods at location $x$ becomes $2q(x)$), and then move the goods located according to $2q$ to the optimal storage location.
Similarly we have

$$
2 H _ {\ell} \left(\frac {p + q}{2}\right) \leq O _ {\ell} (q, p) + 2 H _ {\ell} (p)
$$

Averaging the two bounds (and using the symmetry of $\ell$, so that $O_{\ell}(p, q) = O_{\ell}(q, p)$), we obtain

$$
2 D _ {\ell} ^ {\mathrm {J S}} (p \| q) = 2 H _ {\ell} \left(\frac {p + q}{2}\right) - \left(H _ {\ell} (q) + H _ {\ell} (p)\right) \leq O _ {\ell} (p, q)
$$

![](images/5adf4ee8e10238f51a349b571f68aaabf3a8464fb824b419f70cd3c27f14e2a4.jpg)

# C.2 IMPOSSIBILITY OF JENSEN-SHANNON DIVERGENCE ESTIMATION

Suppose we have a consistent estimator for the Jensen-Shannon divergence, i.e. a function $\hat{\mathrm{JS}}$ such that for any pair of distributions $p, q$ and given $N$ i.i.d. samples $p_N \sim p$ and $q_N \sim q$ we have $\lim_{N\to \infty}\hat{\mathrm{JS}}(p_N\| q_N) = \mathrm{JS}(p\| q)$; we derive a contradiction by the probabilistic method.

Let $p$ be a standard Gaussian distribution, and let $Q^{M}$ be the uniform distribution on a set of $M$ i.i.d. samples from $p$ (hence $Q^{M}$ is itself a random variable that depends on the i.i.d. samples). Let $Q^{*}$ be the limit of $Q^{M}$ as $M \to \infty$ (i.e. the uniform distribution on an infinite set of samples). Let $q^{*}$ denote a value that $Q^{*}$ can take. Because $q^{*}$ is always supported on countably many points, $\mathrm{JS}(p\|q^{*}) = 1$. Note that for any $N$ the following two sampling processes lead to identical distributions on $(p_{N}, q_{N})$:

$$
p _ {N} \sim p, q _ {N} \sim p \quad Q ^ {*} \sim p, p _ {N} \sim p, q _ {N} \sim Q ^ {*}
$$

Hence, the expectation of any function (including $\hat{\mathrm{JS}}$) is also identical.
$$
\mathbb {E} _ {Q ^ {*} \sim p} \left[ \mathbb {E} _ {p _ {N} \sim p, q _ {N} \sim Q ^ {*}} [ \hat {\mathrm {J S}} (p _ {N}, q _ {N}) ] \right] = \mathbb {E} _ {p _ {N} \sim p, q _ {N} \sim p} [ \hat {\mathrm {J S}} (p _ {N}, q _ {N}) ]
$$

Hence the limits as $N\to \infty$ must also be identical:

$$
\lim _ {N \rightarrow \infty} \mathbb {E} _ {Q ^ {*} \sim p} \left[ \mathbb {E} _ {p _ {N} \sim p, q _ {N} \sim Q ^ {*}} [ \hat {\mathrm {J S}} (p _ {N}, q _ {N}) ] \right] = \lim _ {N \rightarrow \infty} \mathbb {E} _ {p _ {N} \sim p, q _ {N} \sim p} [ \hat {\mathrm {J S}} (p _ {N}, q _ {N}) ]
$$

Because the Jensen-Shannon divergence is always bounded, any consistent estimator must also be bounded for sufficiently large $N$. By the dominated convergence theorem we can exchange the expectation and the limit:

$$
\mathbb {E} _ {Q ^ {*} \sim p} \left[ \lim _ {N \to \infty} \mathbb {E} _ {p _ {N} \sim p, q _ {N} \sim Q ^ {*}} [ \hat {\mathrm {J S}} (p _ {N}, q _ {N}) ] \right] = \lim _ {N \to \infty} \mathbb {E} _ {p _ {N} \sim p, q _ {N} \sim p} [ \hat {\mathrm {J S}} (p _ {N}, q _ {N}) ]
$$

By the probabilistic method (i.e. for any function $f$, if $\mathbb{E}_{Q^* \sim p}[f(Q^*)] = 0$ there must exist some $q^*$ such that $f(q^*) \leq 0$), there must exist some $q^*$ such that

$$
\lim _ {N \to \infty} \mathbb {E} _ {p _ {N} \sim p, q _ {N} \sim q ^ {*}} [ \hat {\mathrm {J S}} (p _ {N}, q _ {N}) ] \leq \lim _ {N \to \infty} \mathbb {E} _ {p _ {N} \sim p, q _ {N} \sim p} [ \hat {\mathrm {J S}} (p _ {N}, q _ {N}) ] = 0 \neq \mathrm {J S} (p \| q ^ {*}) = 1
$$

Therefore $\hat{\mathrm{JS}}$ cannot be consistent.

# C.3 CLIMATE CHANGE EXPERIMENT DETAILS

Setup Details In this experiment, we extract yearly weather statistics for each year from 1981-2019. We use the NOAA dataset, which contains daily weather data from thousands of weather stations across the globe.
For each year we compute the following summary statistics: average yearly temperature, average yearly humidity, average yearly wind speed, and average number of rainy days in a year. For example, $x_{1990}$ is a 4-dimensional vector where each dimension corresponds to one of the summary statistics above.

Let $p$ denote the uniform distribution over $\{x_{1981},\dots ,x_{1999}\}$ and $q$ denote the uniform distribution over $\{x_{2000},\dots ,x_{2019}\}$. For example, $\mathbb{E}_p[\ell (X,a)]$ denotes the expected loss of action $a$ for a random year sampled from 1981-1999. Note that for many decision problems, it is possible to make yearly decisions (e.g. decide the best crop to plant for each year). However, because we want to measure the difference between the two time periods 1981-1999 vs. 2000-2019, we choose the action space $\mathcal{A}$ to be a single crop selection that will be used for the entire time period (rather than a different crop selection for each year). Similarly, for energy production we choose the action space $\mathcal{A}$ to be the proportion of different energy production methods that will be used for the entire time period.

Crop yield We obtain the crop yield loss function $\ell (x,a)$ with the following procedure:

1. We obtain the crop yield dataset from (FAOSTAT et al., 2006); each entry we extract is the following tuple: (country code, year, crop type, yield per hectare (kg/ha)).
2. We associate each country code with the central coordinate (i.e. the average latitude and longitude) of the country. For each central coordinate we find the nearest weather station in the NOAA database, and use the data from that station as the weather data for the country.
3. Based on step 2, for each (country code, year) pair we can associate a weather-statistics vector (i.e. the 4-dimensional vector described in Setup Details). We update each entry from step 1 to be (weather statistics, crop type, yield per hectare).
4.
Based on the data entries obtained in step 3, we train a kernel ridge regression model to learn the function $\ell(x, a)$, where $x$ is the weather statistics, $a$ is the crop type, and $\ell(x, a)$ is trained to predict the yield (normalized by market price) under weather $x$ for crop type $a$.

Energy production We consider three types of energy production methods: solar, wind, and traditional (such as fossil fuel). Solar energy and wind energy both depend heavily on weather, while traditional energy does not. In particular, we use empirical formulas for the solar and wind energy calculations:

solar $\propto$ number of sunny days $\times$ daylight hours

wind $\propto$ wind velocity

# D DISCUSSIONS

Future Work In this paper we explored the application of the H-divergence to two-sample testing. Future work can explore other applications of such divergences. Potential applications include:

- Generative model training. Many generative-model learning algorithms minimize divergences (Nowozin et al., 2016; Arjovsky et al., 2017), and future work can explore whether the new divergence family leads to new generative model learning algorithms.
- Independence tests. Independence tests are two-sample tests between the joint distribution $p_{XY}$ and the product of marginal distributions $p_{X}p_{Y}$, so the two-sample test results from this paper are applicable to independence tests.
- Robustness. Many robust optimization, estimation, or prediction methods aim to achieve good performance even when the data distribution is perturbed. Typically, perturbation is measured by, e.g., the KL divergence or $L_p$ distances. Future work can measure perturbation with the H-divergence $H_{\ell}$ by choosing loss functions $\ell$ that are tailored to the problem.
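Step 4 of the crop-yield procedure above fits a kernel ridge regression model. A minimal self-contained RBF version is sketched below (hyperparameters and function names are our assumptions, not values from the paper):

```python
import numpy as np

def fit_kernel_ridge(X, y, gamma=1.0, alpha=1e-6):
    """Tiny RBF kernel ridge regressor, an illustrative stand-in for the
    learned yield model; gamma/alpha are illustrative defaults."""
    # Gram matrix of pairwise RBF kernel values between training points.
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    # Ridge-regularized solve for the dual coefficients.
    coef = np.linalg.solve(K + alpha * np.eye(len(X)), y)
    def predict(x):
        # Kernel values between the query point and all training points.
        k = np.exp(-gamma * ((X - x) ** 2).sum(-1))
        return float(k @ coef)
    return predict
```

Fitting one such regressor per crop type $a$ on the (weather statistics, yield) pairs then yields a learned $\ell(x, a)$ as the price-normalized predicted yield under weather $x$.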
+ +Computation Issues There are several situations where estimating the H-divergence is (provably) computationally feasible: + +- When $\mathcal{A}$ is a small finite set, in which case we can enumerate all possible values of $a \in \mathcal{A}$ . +- When the loss function $\ell(y, a)$ is convex in $a$ , in which case we can accurately estimate the H-divergence in polynomial time by solving the optimization problem $\inf_{a} \mathbb{E}[\ell(Y, a)]$ by gradient descent. + +In general, while it is difficult to guarantee computational feasibility, we use a practical technique that works well in our experiments: we use the same number of gradient descent steps for evaluating $H_{\ell}\left(\frac{p + q}{2}\right)$ and $H_{\ell}(p), H_{\ell}(q)$ . Intuitively, the "sub-optimality" when estimating the three terms is approximately the same in expectation and cancels out. \ No newline at end of file diff --git a/comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/images.zip b/comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4f48795e8ad5aac6adcbb2fe5de69f5dc6b5227d --- /dev/null +++ b/comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ae6e6554f176eae07f2b4e53072ed4c209b3b88cdb3eab3aac590752b2d7120 +size 1033851 diff --git a/comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/layout.json b/comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..80d00aef3a822a2541ad9d185d1f93e191f9127d --- /dev/null +++ b/comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20b8d027af9af259c4e7184f9109c3016601cc98c0a7178c9eded77fbcdc3b24 +size 1006089 diff --git 
a/coordinationamongneuralmodulesthroughasharedglobalworkspace/a937660a-9b49-49b5-9181-ee4c710a6943_content_list.json b/coordinationamongneuralmodulesthroughasharedglobalworkspace/a937660a-9b49-49b5-9181-ee4c710a6943_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ca931c1573d59856c00a15f7e57e3cb50e787605 --- /dev/null +++ b/coordinationamongneuralmodulesthroughasharedglobalworkspace/a937660a-9b49-49b5-9181-ee4c710a6943_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50024d0f29307a7d394fd79a276a6bca50def494a19425bf7b67d7fabf0d4cf5 +size 140723 diff --git a/coordinationamongneuralmodulesthroughasharedglobalworkspace/a937660a-9b49-49b5-9181-ee4c710a6943_model.json b/coordinationamongneuralmodulesthroughasharedglobalworkspace/a937660a-9b49-49b5-9181-ee4c710a6943_model.json new file mode 100644 index 0000000000000000000000000000000000000000..1448bb322805010c15c006b18e641b25b9263fce --- /dev/null +++ b/coordinationamongneuralmodulesthroughasharedglobalworkspace/a937660a-9b49-49b5-9181-ee4c710a6943_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b5a4ca2aed5e4c64131c61a90cafeaae54a4dc86f53bf58c23e6243de8a8a76 +size 173881 diff --git a/coordinationamongneuralmodulesthroughasharedglobalworkspace/a937660a-9b49-49b5-9181-ee4c710a6943_origin.pdf b/coordinationamongneuralmodulesthroughasharedglobalworkspace/a937660a-9b49-49b5-9181-ee4c710a6943_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5b19573867114ff8758a6e260a735beda2be5457 --- /dev/null +++ b/coordinationamongneuralmodulesthroughasharedglobalworkspace/a937660a-9b49-49b5-9181-ee4c710a6943_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:188b9eef1d1430220e6a30c9b597629c2013c0415c402217858056c478437221 +size 990798 diff --git a/coordinationamongneuralmodulesthroughasharedglobalworkspace/full.md 
b/coordinationamongneuralmodulesthroughasharedglobalworkspace/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d13df138549daa77bb3ef83916f9fc35d7b0549b --- /dev/null +++ b/coordinationamongneuralmodulesthroughasharedglobalworkspace/full.md @@ -0,0 +1,560 @@ +# COORDINATION AMONG NEURAL MODULES THROUGH A SHARED GLOBAL WORKSPACE + +Anirudh Goyal $^{1}$ , Aniket Didolkar $^{1}$ , Alex Lamb $^{5}$ , Kartikeya Badola $^{6}$ , Nan Rosemary Ke $^{2}$ , Nasim Rahman $^{1,3}$ , Jonathan Binas $^{1}$ , Charles Blundell $^{2}$ , Michael Mozer $^{4}$ , Yoshua Bengio $^{1}$ + +# ABSTRACT + +Deep learning has seen a movement away from representing examples with a monolithic hidden state towards a richly structured state. For example, Transformers segment by position, and object-centric architectures decompose images into entities. In all these architectures, interactions between different elements are modeled via pairwise interactions: Transformers make use of self-attention to incorporate information from other positions and object-centric architectures make use of graph neural networks to model interactions among entities. We consider how to improve on pairwise interactions in terms of global coordination and a coherent, integrated representation that can be used for downstream tasks. In cognitive science, a global workspace architecture has been proposed in which functionally specialized components share information through a common, bandwidth-limited communication channel. We explore the use of such a communication channel in the context of deep learning for modeling the structure of complex environments. The proposed method includes a shared workspace through which communication among different specialist modules takes place but due to limits on the communication bandwidth, specialist modules must compete for access. 
We show that capacity limitations have a rational basis in that (1) they encourage specialization and compositionality and (2) they facilitate the synchronization of otherwise independent specialists.

# 1 INTRODUCTION

Deep learning has seen a movement towards more structured models with a cleaner separation between different pieces of information, often handled by different components. The induced structure and separation of knowledge have improved generalization, model-size scaling, and the handling of long-range dependencies (Berner et al., 2019; Vinyals et al., 2019; Brown et al., 2020). This opens up questions about how to achieve coherence and coordination between different components in such architectures. Looking back to the 1980s, the focus in AI was much less on learning and more on constructing articulated, multi-component architectures and examining how intelligence might emerge from interactions among this collection of simple, functionally specialized components (Fodor, 1983; Braitenberg, 1986; Minsky, 1988; Brooks, 1991). Each of these specialist modules is on the scale of a typical component of a computer program, like a subroutine that implements a narrow, prespecified function from certain input contents to certain output contents. Through appropriate communication and coordination, a set of specialists can achieve complex, dynamic, and flexible behavior patterns.

![](images/3da599b75b2160f632cee230993bf976d5841a31384ab4878eddaf2a5b81cbdb.jpg)
Figure 1: Step 1: an ensemble of specialist modules doing their own default processing; at a particular computational stage, depending upon the input, a subset of the specialists becomes active. Step 2: the active specialists get to write information in a shared global workspace. Step 3: the contents of the workspace are broadcast to all specialists.

As a concrete illustration, consider the task of driving a car in terms of specialists.
One specialist might monitor the position of the car with respect to lines on the road, and another specialist might adjust the steering direction based on the perceptual data. In addition, there might be specialists which provide alerts when certain events occur, such as loud sounds, reaching a critical intersection on a route, or coming into close proximity to the car in front. To execute the task of driving the car properly, all these specialists need to interact coherently and broadcast their individual information to each other.

Arguably, modern ML and AI have yet to develop broad architectural frameworks for learning both the specialist modules and how they should interact, while the classical view lacks an articulate story about how learning could take place successfully in such frameworks. In this article, we revisit this classical view with modern machine learning tools based on end-to-end learning and differentiable memory and attention mechanisms. Inspired by the Global Workspace Theory (Baars, 1993; Dehaene et al., 1998; Shanahan and Baars, 2005; Shanahan, 2006; 2010; 2012; Dehaene et al., 2017) from cognitive neuroscience, we argue that more flexibility and generalization emerge through an architecture of specialists if their training encourages them to communicate effectively with one another via the bottleneck of a shared workspace (Figure 1).

Distributed specialist modules. From a computational perspective, articulated multi-component architectures composed of sparsely interacting specialist modules show desirable scaling properties (e.g., more specialists can seamlessly be added), increased robustness (the system can tolerate the removal of or changes in individual specialists), and efficiency (information is processed predominantly locally, reducing the cost of communication between specialists).
However, modularization also requires mechanisms to establish sharing of compatible representations across specialists, a form of shared internal language. While portions of a task might be solved by independent specialists, synchronization is critical particularly when there are statistical, functional, or causal dependencies among the specialists. + +Coherence through a shared workspace. In cognitive neuroscience, the Global Workspace Theory (GWT) (Baars, 1993; Dehaene et al., 2017) suggests an architecture allowing specialist modules to interact. The key claim of GWT is the existence of a shared representation—sometimes called a blackboard, sometimes a workspace—that can be modified by any specialist and that is broadcast to all specialists, along with the notion that write access is limited to maintain coherence. Our interpretation of this restriction on write access is that it stems from an assumption on the form of the joint distribution between high-level concepts. In this paper, we explore a communication and coordination scheme similar to the one proposed by GWT for modern neural network architectures like Transformers (Vaswani et al., 2017; Dehghani et al., 2018; Parmar et al., 2018; Radford et al., 2019; Brown et al., 2020) and attention-based modular architectures (Goyal et al., 2019; Rahaman et al., 2020; Mittal et al., 2020a; Goyal et al., 2020; Madan et al., 2021). + +In terms of our driving example, the workspace could be used to override default behaviors by giving high priority to specialist modules which provide alerts of various sorts (loud sounds, presence of a child on the street), allowing specialists which respond to such alerts to take control of behavior over default driving routines. This scenario implies that prioritization of signals in a shared workspace is critical. + +A shared communication channel necessitates common representations. For a multitude of specialist modules to cooperate, a common language is necessary (Baars, 1997). 
For example, in the driving scenario, alerts may come from auditory or visual processing specialists, but regardless of the source, a signal for danger must be placed in the workspace to override default behavior, whether that behavior is controlled by a radio-tuning specialist or a steering specialist. Although specialist modules can be pre-wired to have compatible communication interfaces, we will model an architecture in which an ensemble of specialist modules is trained in coordination, which should lead to a shared language (Colagrosso and Mozer, 2005). Internally, individual specialists can use whatever form of representation serves them, but their inputs and outputs require alignment with other specialists in order to synchronize. For example, an unusual event such as a rough thud under the wheels might not have been previously experienced, but the mere signalling of novelty could override default specialists. Without a global communication channel, specialists would have to learn to communicate through pairwise interactions, which might limit coordination of behavior in novel situations: global communication ensures exchangeability of knowledge to achieve systematic generalization.

![](images/6b072e1e2fe5baed3ba896d394821dec27df3d6455bdb1115ef5494296544525.jpg)
a) RIMs

![](images/e2f540bdd8f371e3796abd591948d95d5a714894c901660cbb5ece7740cf91a5.jpg)
b) Transformer

![](images/52b0ab027d77236dd9f23a6742de379d00b26928815e1597950eea98e5a649a1.jpg)
c) TIMs

![](images/52c3fdd735ed0a3a15bf7c04ab2edbeaea90fd2f763cada31c74850dc958a033.jpg)
d) Universal Transformer

![](images/23e07aae820b62e02e0b44d896f7d5847077779d5f6ff82bb0823faaee7ee06f.jpg)
a) RIMs + SW

![](images/3cb3059dc37bf0153349137d4205d6b10af8ac8277adc606960414bd00038c64.jpg)
b) Transformer + SW

![](images/734c03ba0e2e51a3ae406ee6314457509956bded08f9c1dbef6b67af3aafa396.jpg)
c) TIMs + SW

![](images/c3f13e2a3c8c8ab062de8a247baa4fd8694bc46faac4df152aec82d415960a45.jpg)
d) Universal Transformer + SW

Figure 2: Using a Shared Workspace for creating global coherence in RIMs, Transformers, TIMs, and Universal Transformers (UT). (Top Half) All four of these architectures use pairwise communication (using key-value attention) to establish coherence between individual specialist modules. In the case of RIMs (Goyal et al., 2019) and TIMs (Lamb et al., 2021), these specialists are independent modules that compete with each other in order to take control over the state update based on a given input. In the case of Transformers (Vaswani et al., 2017) and Universal Transformers (Dehghani et al., 2018), each specialist is associated with a different position. Activated specialists are denoted by a blue shade, and the intensity depends on the degree of activation. In the case of Universal Transformers, the state-update dynamics for each position are shared across all layers and all positions (denoted by a yellow triangle). (Bottom Half) We replace pairwise communication with a shared workspace to create global coherence between different specialists. Communication using the shared workspace is a two-step process (as denoted by 1 and 2 in the figures). In the first step (1), specialists compete for write access to the shared workspace, resulting in a subset of them being activated (in blue), and only the activated specialists perform the write operation on the workspace. In the second step (2), the contents of the shared workspace are broadcast to all the specialists.

# 2 SYNCHRONIZING NEURAL MODULES THROUGH A SHARED WORKSPACE

We investigate a neural architecture reminiscent of the GW model, where a number of sparsely communicating specialist modules interact via a shared working memory. In particular, we extend the Transformer (Vaswani et al., 2017) and attention- and slot-based modular architectures (Goyal et al., 2019) by adding a shared workspace and allowing modules (each representing an entity) to compete for write access in each computational stage.

Key-value attention. Key-value attention defines the backbone of updates to the hidden states in the proposed model. This form of attention is widely used in self-attention models and performs well on a wide array of tasks (Bahdanau et al., 2014; Vaswani et al., 2017; Santoro et al., 2018). Key-value attention selects an input value based on the match of a query vector to a key vector associated with each value. To allow differentiability and thus easier learnability, selection is soft and computes a convex combination of all the values. Such a mechanism makes it possible to change on-the-fly both the source of input and how the shared workspace is updated. It also makes the outputs of the specialists and the elements of the memory permutation invariant: they should be considered as an unordered set of elements to be selected by an attention mechanism from the contents of specialists.
More precisely, soft attention uses the product of a query (represented as a matrix $Q$ of dimensionality $N_{r} \times d$, with $N_{r}$ queries, and $d$ the dimension of each query) with a set of $N_{o}$ objects each associated with a key, stored as a row of the matrix $K$ ($N_{o} \times d$). After normalization with a softmax, the resulting convex weights are used to combine the values $V_{i}$ (row $i$ of matrix $V$):

$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{Q K^{\mathrm{T}}}{\sqrt{d}}\right) V,
$$

where the softmax is applied to each row of its argument matrix, yielding a set of convex weights. For our experiments, we use multihead dot product attention.

Neural modules with pairwise interactions. Our approach to synchronizing neural modules is highly general and mostly agnostic to the task, domain, or specific choice of architecture, with the only requirement being that the model consists of multiple specialist modules which either operate independently or have sparse interactions that only match pairs of modules at a time. Our goal is to explore how introducing a shared workspace can help these modules to become better synchronized and coordinated. We show the utility of the shared workspace for synchronization in (a) Transformers (Vaswani et al., 2017), in which all interactions between positions are performed via attention, and (b) slot-based architectures like Recurrent Independent Mechanisms or RIMs (Goyal et al., 2019), in which all pairwise interactions between modules are performed via attention. In the context of slot-based architectures, each slot's content is associated with a specialist module, whereas in Transformers different entities, each associated with a different position, act as specialist modules (Figure 2).

Both Transformers and RIMs utilize a self-attention mechanism for sharing information between modules, typically implemented in a pairwise manner, i.e., each specialist attends to every other specialist.
Instead, we facilitate information sharing among specialist modules through a limited-capacity shared workspace. In this framework, at each computational stage, different specialists compete for write access to the common workspace. The contents of the workspace, in turn, are broadcast to all specialist modules simultaneously.

Notation. The input is processed through a sequence of computational stages indexed by $t$, and at each stage, $n_s$ entities are operated on (i.e., $n_s$ different modules in slot-based architectures like RIMs or $n_s$ different positions in the case of Transformers). Each of these $n_s$ specialist modules has a distinct internal $n_h$-dimensional state $h_t^k$, for $k \in \{1, \dots, n_s\}$. The specialist modules communicate with each other via a shared workspace divided into $n_m$ memory slots, each consisting of a vector of $n_l$ elements, denoted $M = [m_1; \dots; m_j; \dots; m_{n_m}]$. The shared workspace is updated across different computational stages, i.e., different time-steps in the case of recurrent architectures and different layers in the case of Transformers. At each computational stage $t$, different specialists compete for writing in the shared workspace, but all specialists can read from the current state of the workspace. In the case of an autoregressive task, we can restrict the information sharing to previous positions and keep a separate version of the workspace for each position.

# 2.1 SPECIFICS OF THE SHARED WORKSPACE.

Step 1: Process Input to obtain an entity representation for each specialist. The first step is external to the proposed method, and involves processing the input to form the initial representation vector for each of the different specialists. Different common deep learning architectures can be used to form the representation of different specialists.
For example, Transformers start with a matrix $n_s \times n_h$ whose rows are initialized as the $n_h$ -dimensional embeddings of the input at each position of the sequence. Slot-Based Recurrent architectures like RIMs consist of a single-layer recurrent structure where the hidden state $\mathbf{h}_t$ at computational stage $t$ is decomposed into the substates of the $n_s$ specialists, $\mathbf{h}_t^k$ for $k = 1, \ldots, n_s$ . + +In the proposed scheme, within each computational stage, the updates of the hidden state of different specialists follow a two-step process. First, specialists compete and write to a shared workspace. Second, information from the workspace gets broadcast to all the specialists, as detailed next. + +Step 2: Writing Information in the shared workspace. The specialists compete to write into the shared workspace, whose contents need to be updated in the context of new information received from different specialists. This step ensures that only the critically important signals make it to the shared workspace, therefore preventing the workspace from being cluttered. Let matrix $\pmb{R}$ represent the combined state of all the specialists (i.e. $h_t^k \forall k \in \{1, \dots, n_s\}$ as the rows of $\pmb{R}$ ). In order to implement the competition between specialists to write into the workspace, we use a key-query-value attention mechanism. In this case, the query is a function of the state of the current workspace memory content, represented by matrix $M$ (with one row per slot of the memory), i.e. $\widetilde{\pmb{Q}} = M\widetilde{\pmb{W}}^q$ . Keys and values are a function of the information from the specialists i.e., a function of $\pmb{R}$ . We apply dot product attention to get the updated memory matrix: $M \gets \text{softmax} \left( \frac{\widetilde{Q}(R\widetilde{W}^e)^{\mathrm{T}}}{\sqrt{d_e}} \right) R\widetilde{W}^v$ . 
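The write step just described can be sketched as follows. This is a single-head sketch (the gated, multi-round update used in the full model is omitted); the matrix names mirror the notation above, and the projection matrices stand in for learned parameters:

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with max-subtraction for numerical stability.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def write_to_workspace(M, R, Wq, We, Wv):
    """One write step: M <- softmax(Q (R We)^T / sqrt(d)) (R Wv).
    M: (n_m, n_l) workspace slots; R: (n_s, n_h) specialist states."""
    Q = M @ Wq                                   # queries from memory slots
    K = R @ We                                   # keys from specialists
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # (n_m, n_s): soft competition
    return A @ (R @ Wv)                          # updated slot contents
```

Each row of `A` is a convex combination over specialists, so each memory slot stores a weighted summary of the specialist states, with the weighting (the competition) determined by the slot's previous content.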
The use of a regular softmax to write into $M$ leads to a standard soft competition among different specialists to write in the shared workspace. One can also use a top-$k$ softmax (Ke et al., 2018) to select a fixed number of specialists allowed to write in the shared workspace: based on the pre-softmax values, the $k$ specialists with the highest values are selected and get access to write in the shared workspace. Selection with a top-$k$ softmax is a hybrid between hard and soft selection. We denote the set of thus selected specialists as $\mathcal{F}_t$. We note that we can apply the attention mechanism multiple times to distill information from different specialists into the shared workspace. Here, the contents of the shared workspace are updated in a gated way, as proposed in RMC (Santoro et al., 2018). We refer the reader to appendix section C for more details.

Step 3: Broadcast of information from the shared workspace. Each specialist then updates its state using the information broadcast from the shared workspace. We again utilize an attention mechanism to perform this consolidation. All the specialists create queries $\widehat{\pmb{q}}_k = \pmb{h}_t^k\widehat{\pmb{W}}^q$, which are matched with the keys $\widehat{\pmb{\kappa}}_j = (\pmb{m}_j\widehat{\pmb{W}}^e)^{\mathrm{T}}$, $\forall k\in \{1,\dots ,n_s\}$, $j\in \{1,\dots ,n_m\}$, from the updated memory slots, forming attention weights $s_{k,j} = \mathrm{softmax}\left(\frac{\widehat{\pmb{q}}_k\widehat{\pmb{\kappa}}_j}{\sqrt{d_e}}\right)$. The values generated by each slot of the shared workspace and the attention weights are then used to update the state of all the specialists: $\pmb{h}_t^k\gets \pmb{h}_t^k + \sum_j s_{k,j}\widehat{\pmb{v}}_j$ where $\widehat{\pmb{v}}_j = \pmb{m}_j\widehat{\pmb{W}}^v$, $\forall k\in \{1,\ldots ,n_s\}$.
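The broadcast step can be sketched analogously (again a single-head sketch; the projection matrices stand in for the learned $\widehat{W}^q$, $\widehat{W}^e$, $\widehat{W}^v$):

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with max-subtraction for numerical stability.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def broadcast_from_workspace(H, M, Wq, We, Wv):
    """Broadcast step: h_k <- h_k + sum_j s_{k,j} v_j.
    H: (n_s, n_h) specialist states; M: (n_m, n_l) workspace slots."""
    Q = H @ Wq                                   # queries from specialists
    K = M @ We                                   # keys from memory slots
    S = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # (n_s, n_m) read weights
    return H + S @ (M @ Wv)                      # residual state update
```

Note the asymmetry with the write step: here every specialist reads (no competition), while write access is the contested resource.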
After receiving the broadcast information from the workspace, each specialist updates its state by applying its dynamics function, e.g., a one-step update of an LSTM or GRU cell in the case of recurrent architectures, or a feedforward layer in the case of Transformers. This yields the new value $\pmb{h}_{t + 1}^{k}$ for the $k$-th specialist, from which we start the next stage $(t + 1)$.

Replacing pairwise interactions among neural modules with interaction facilitated by the shared workspace allows for the following:

1. Higher-order (HO) interaction among neural modules. The two-step write-read process first allows each memory slot to store a 'filtered summary' of the current input, where the 'filter' is determined by the previous state of that slot (the 'query' for the write step). Neural modules then summarize the information contained in these slots and update their states. Hence, unlike with pairwise interaction, messages passed among neural modules in the shared-workspace setting also include HO interaction terms: those involving more than two modules at a time. Naturally, HO interactions require that messages passed among neural modules lie in the same representation space, which is precisely what we aim to achieve by allowing message passing only via a single global channel.

2. Dynamic filtering due to persistence of memory. With a shared workspace (SW), the contents of the memory slots play a key role in filtering and summarizing the information contained in the input at a given time step. Persistence of memory throughout an episode 1) allows the memory layer to summarize and filter information based on what it has seen thus far, and 2) should lead to better generalization, as the model can dynamically modify its filtering machinery for a particular input. In contrast, the "inducing points" in Set Transformers (Lee et al., 2019) are fixed after training, so that bottleneck cannot adjust itself on the fly for a new input.
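The pairwise-versus-workspace contrast above can be quantified by counting attention-score entries per stage: all-pair self-attention among $n_s$ specialists computes $n_s^2$ scores, whereas routing through $n_m$ memory slots needs $n_s \cdot n_m$ for the write step plus $n_s \cdot n_m$ for the broadcast. A toy illustration (the counts themselves, not any library API, are the point):

```python
def score_entries_pairwise(n_s):
    """All-pair self-attention among n_s specialists: an n_s x n_s score matrix."""
    return n_s * n_s

def score_entries_workspace(n_s, n_m):
    """Write (n_m x n_s) plus broadcast (n_s x n_m) through n_m memory slots."""
    return 2 * n_s * n_m

# For 64 specialists and an 8-slot workspace:
print(score_entries_pairwise(64))       # 4096
print(score_entries_workspace(64, 8))   # 1024
```

For fixed $n_m$, the workspace cost grows linearly in $n_s$, while the pairwise cost grows quadratically.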
We present comparisons on several tasks in Section 4. In Appendix D, we show the importance of these two properties by comparing the performance of SW with a) $2\times$ self-attention (to simulate HO interaction without global communication) and b) a version without memory persistence.

Computational complexity of using a shared workspace for synchronizing different specialists. To encourage coherent global coordination, Transformers and slot-based recurrent architectures rely on pairwise interactions captured via an attention mechanism. Unfortunately, such attention mechanisms scale quadratically with the number of specialists. Here, we propose a method which uses a shared workspace to create global coherence between different specialists and, in the process, replaces the pairwise interactions of conventional dot-product attention. The computational complexity of the proposed method is thus linear in the number of specialists. In our experiments, the number of memory slots is practically constant, which suggests a very favourable scaling behaviour, certainly much less than quadratic. As a point of reference, the analogue of the number of slots in human working memory (Baars, 1993) is indeed very small (fewer than 10 slots).

# 3 RELATED WORK

This work taps into a line of reasoning put forward by historical works, such as Minsky (1988); Braitenberg (1986); Fodor (1983), wherein it is argued that in order to deal with a wide spectrum of conditions and tasks, an intelligent system should be composed of many interacting specialized modules or programs, rather than a single "one-size-fits-all" entity.
While modular architectures have been the subject of a number of research directions (Jacobs et al., 1991; Bottou and Gallinari, 1991; Ronco et al., 1997; Reed and De Freitas, 2015; Andreas et al., 2016; Rosenbaum et al., 2017; Fernando et al., 2017; Shazeer et al., 2017; Rosenbaum et al., 2019; Goyal and Bengio, 2020), we focus here on a mechanism for achieving coherence and synchronization between specialist modules via a global workspace shared among all specialists.

Prior works have explored incorporating slot-based memory in the context of recurrent neural networks (Graves et al., 2014; 2016; Santoro et al., 2018). In the context of Transformers, Burtsev and Sapunov (2020) introduce memory tokens that are processed in addition to sequence tokens, whereas Dai et al. (2019) (Transformer-XL) propose to partition a long sequence into smaller segments and keep the activations of the previous segment in memory while processing the current segment. Building on the latter, Rae et al. (2019) propose to store activations from prior segments in a compressed memory. However, these methods do not restrict memory writes to be sparse and competitive. On the cognitive-neuroscience side, the global neuronal workspace (GNW) model (Dehaene and Changeux, 2011) identifies the global workspace with a large network of excitatory pyramidal neurons with long-range axonal processes connecting prefrontal and parietal cortices. Further, deploying a shared workspace to establish coherence between different specialists, as opposed to using all-pair communication, has an added benefit: it allows us to tackle the $O(n^2)$ complexity of self-attention. This makes our work related to previous work on reducing the computational complexity of dot-product attention in Transformers. Lee et al. (2019) introduce the ISAB module, which maps between sets and comprises two dot-product attention layers.
In the first layer, a set of trainable parameters is used as queries and the elements of the input set as keys; in the second layer, the output of the first layer is used as keys and the input set as queries. However, unlike in this work, the intermediate states (corresponding to the output of the first layer) are not maintained across layers. Concurrently with our work, Jaegle et al. (2021) also introduced the idea of using a learned latent bottleneck to address quadratic complexity, but there are important differences. For example, the latent bottleneck in Perceiver iteratively queries the information at different positions and does not maintain representations of the different specialists. More precisely, in our proposed method different specialists write information into the workspace and information is then read back from the shared workspace, whereas in Perceiver the latent bottleneck iteratively reads information from the set of positions. We also show the applicability of the proposed idea both for slot-based models and for Transformers.

The proposed model can also be seen as integrating different ideas popular in modular architectures (Andreas et al., 2016; Goyal et al., 2019), memory networks (Graves et al., 2014; Santoro et al., 2018), and mixtures of experts (Jacobs et al., 1991), and hence combining some of their benefits in a unified architecture. The proposed model is factored as a set of specialists (incorporating modularity). It achieves coordination among the different specialists via the shared workspace (in Neural Turing Machines, there is only a single specialist, i.e., no modularity). Multiple experts can be active at the same time (generally not the case with a mixture of experts).
# 4 EXPERIMENTS

Here we briefly outline the tasks on which we applied the idea of the shared workspace and direct the reader to the appendix for additional experiments (Appendix G), full details on each task, and the hyperparameter settings for the model. The experiments have the following goals: (a) demonstrate that the use of the shared workspace can improve results on a wide array of challenging benchmark tasks, showing the practical utility and breadth of the technique; (b) show that the shared workspace addresses coherence between different specialists by achieving improved performance without requiring all pairwise interactions. Finally, to show the wide applicability of our model, we integrate SW into TIMs (Lamb et al., 2021), SCOFF (Goyal et al., 2020), and BRIMs (Mittal et al., 2020b) and show improvements over the default communication method used in each.

Making sense of the visual input. Using a shared workspace introduces a bottleneck in the sharing of information between specialists. Since the size of the workspace is limited and generally much smaller than the number of specialists, there is a limit to the amount of information that can be exchanged among specialists. We hypothesize that mediating communication through a limited-capacity workspace should encourage the model to look at the relevant information that is important for the downstream objective. We test this hypothesis on a set of visually challenging benchmarks. For our experiments, we use either Transformers or RIMs as a backbone. We consider variants of Transformers based on different subsets of important properties. Transformers [TR]: self-attention based multi-layer architecture (Vaswani et al., 2017) with shared parameters across layers. Set Transformer [ISAB]: Transformers where self-attention is replaced by the ISAB module (Lee et al., 2019). Sparse Transformers [STR]: Transformers with sparse factorizations of the attention matrix (Child et al., 2019). High Capacity Transformers [TR+HC]: same as TR but with different parameters across layers. Transformers with Shared Workspace with soft competition [TR+SSW]: Transformers with different positions competing with each other to write into the shared workspace using soft competition. Transformers with Shared Workspace with top-$k$ competition [TR+HSW]: Transformers with different positions competing with each other to write into the shared workspace using top-$k$ competition. For a more detailed description of all the tasks described below, we refer the reader to Appendix E.

![](images/afec0016c10422111f21a97a1abe8eb70fec5c0d7c0c12ed4d0f058d35a900cf.jpg)
Figure 3: Detecting Equilateral Triangles. We compare the performance of Transformers with a shared workspace to other Transformer baselines, plotting the test accuracy for each model.

| Model | Top-1 % | Top-5 % |
| --- | --- | --- |
| ISAB | 65.3±0.025 | 83.6±0.011 |
| STR | 70.6±0.08 | 87.33±0.06 |
| TR | 70.83±0.44 | 87.8±0.08 |
| TR + HC | 70.17±0.31 | 88.33±0.2 |
| TR + HSW (ours) | 71.07±0.04 | 88.6±0.49 |
| TR + SSW (ours) | 71.33±0.34 | 88.3±0.05 |

Table 1: Comparison on CATER object tracking. We compare the Top-1 and Top-5 accuracy of Transformers with a shared workspace against Transformers with self-attention. Transformers with a shared workspace outperform those with pairwise self-attention.

Detecting Equilateral Triangles. We first use a simple toy task to test our hypothesis, in which the model must detect equilateral triangles in images (Ahmad and Omohundro, 2009). Each image is of size $64 \times 64$ and contains 3 randomly placed clusters of points. For equilateral triangles, the midpoints of these clusters are equidistant from each other. This is a binary classification task where the model has to predict whether the three given clusters form an equilateral triangle or not. To feed an image into a Transformer, we follow the same methodology as used in vision Transformers (Dosovitskiy et al., 2020): we first divide the image into equal-sized $4 \times 4$ patches and treat each patch as a different input position of the Transformer.

To solve this task correctly, the model only needs to attend to relevant information, i.e., to patches that contain a cluster of points. Therefore, using a limited-capacity shared workspace should be useful here. Our results (presented in Figure 3) confirm this hypothesis.
We can see that Transformers with shared-workspace attention converge much faster and reach higher accuracy than the baseline Transformer. Our method also outperforms the Set Transformer by a significant margin.

Multi MNIST Generation. In this task, we train an Image Transformer (Parmar et al., 2018) (a pixel-by-pixel, raster-order generative model) for next-pixel prediction on the "MultiMNIST dataset", where each image consists of 4 independently sampled MNIST digits stacked horizontally to form one image (see Figure 10 for an example). The main aim of this task is to observe the inductive biases that allow for specialization of mechanisms in TIMs (Lamb et al., 2021). Each image in the MultiMNIST dataset can be broken down into different sets of independent spatial components. Since the digits which make up the image are independently selected, the joint distribution of pixel intensities in any one of the four sections of the image is statistically independent of the pixel intensities in any other section. Moreover, each section of the image can be further broken down into independent spatial components: one that pertains to the background and one that pertains to the foreground. One can expect architectures made up of sparsely interacting mechanisms to naturally capture this statistical independence by dividing labour among different mechanisms, whereas monolithic architectures will spend a major portion of their training time learning these statistical independencies from scratch.

![](images/80a73d391ccac47cc879c95a4d72362c65c8d89b69c5f617da9d3fc586743edd.jpg)
Figure 4: Comparison on Sort-of-CLEVR relational reasoning. Speed of convergence for relational and non-relational questions in the Sort-of-CLEVR dataset. We can see that the proposed model converges much faster than the baselines in both cases.
We find that replacing the pairwise communication in TIMs with a shared workspace (TIMs + SW) leads to a better and more interpretable division of labor among specialists, as shown in Figure 5. From the figure, it is clear that the TIMs model is unable to divide labour among specialists, with mechanism 2 being active for all the pixels in the image. On the other hand, TIMs + SW is able to divide labor among specialists, with each mechanism focusing on a different aspect of the image: mechanism 2 is activated for the digits towards the centre of each of the 4 columns, while mechanisms 3 and 4 cover the background of the digits, with mechanism 3 covering the area between adjacent digits and mechanism 4 covering the area above and below the digits. Thus, using a shared workspace aids the division of labor among different specialists. We also find that TIMs + SW achieves the lowest cross-entropy loss on the test set when compared to TIMs and Image Transformers (Parmar et al., 2018); results are shown in appendix Table 5.

CATER: Object Tracking. CATER is a spatiotemporal reasoning video dataset introduced in Girdhar and Ramanan (2019). Each video contains 3D objects organized on a $6 \times 6$ grid. Each object affords certain actions that can be performed on it. These actions result in movement of the concerned objects and changes in their positions. Some of these actions include: rotate, pick-place, slide, and contain. Throughout the duration of the video, a number of these actions are performed to arrive at the final state of the grid. Note that only a single object undergoes an action at any instant. The task that we focus on here is called localization: the goal is to predict the location of the target object, called the snitch, in the final frame. The snitch, as well as the other objects, moves across the $6 \times 6$ grid.
In some scenarios, the snitch may be covered by other objects, hiding it from view. In such cases, tracking the movement of the snitch across frames becomes essential; therefore, capturing long-range temporal dependencies is essential to solve this task.

The information-exchange limit enforced by the limited capacity of the shared workspace should be useful here as well. For CATER, in some frames the snitch is not visible, as it is covered by other objects; ideally, the model only needs to attend to frames in which the snitch is visible. Additionally, if the snitch is visible in all frames of the video, then to accurately predict its final position the model only needs to attend to the final frame and can completely ignore the initial frames. The results for this task are presented in Table 1. We also experimented with both soft competition (TR+SSW) and hard competition (TR+HSW), with only $k = 5$ specialists writing into the shared workspace. We can see that models with a shared workspace outperform those with pairwise multi-head attention, confirming our hypothesis about the benefits of a shared workspace for this task. As shown in Table 1, the proposed method also convincingly outperforms the Set Transformer.

Relational Reasoning: Sort-of-CLEVR. In relational reasoning, the model is tasked with answering questions about certain properties of various objects and their relations with other objects. The model is presented with an image and a question about that image. This task has a clear sparse structure: in order to answer a question correctly, the model only needs to reason about the specific subset of objects that the question mentions. For this task, we use the Sort-of-CLEVR dataset (Santoro et al., 2017).

Each image in Sort-of-CLEVR is of size $75 \times 75$ and contains 6 randomly placed geometrical shapes of 6 possible colors and 2 possible shapes.
Each image comes with 10 relational questions and 10 non-relational questions.

![](images/68e48d1f1755824aa8b6b11c00104eaf852599fa42593dc108cd30f8dd70332a.jpg)
Figure 5: This figure shows the mechanism activation maps for all 4 mechanisms used in the Multi MNIST generation task, for both TIMs and TIMs + SW. Both images in the figure correspond to activation maps from 4 different examples. Each row contains the 4 mechanisms, shown from left to right. Each mechanism is shown using a $32 \times 32$ image; a particular pixel in a mechanism's activation map is shown in white if that mechanism was used during the generation of that pixel.

Non-relational questions only consider properties of individual objects, whereas relational questions consider relations among multiple objects. For more details about the questions, see appendix Figure 8. The input to the model consists of the image and the corresponding question. We first obtain a sequence of equal-sized patches for the image, as in vision Transformers (Dosovitskiy et al., 2020). We concatenate the resulting patch sequence with the representation of the question and pass the combined sequence through the Transformer. Sort-of-CLEVR has a finite number of possible answers, hence this task is set up as a classification task.

We present the results for this task in Figure 4.
We observe that Transformers with the shared workspace converge faster and outperform the baselines for relational as well as non-relational questions. The superior performance with the shared workspace can be attributed to the inherent sparsity of this task: for non-relational questions, the model only needs to attend to the single object referenced in the question to answer it correctly, while relational questions only consider a small subset of objects in the image, so sparsity is helpful for both types of questions. The limited capacity of the shared workspace therefore forces the model to attend to only the relevant information.

Shared Workspace for Physical Reasoning. In this task, we consider a set of bouncing balls, and the model is tasked with predicting the trajectory of the balls at each step. In order to solve this task, the learner needs to establish a coherent picture of where and which objects will collide. We use the bouncing-ball dataset from Van Steenkiste et al. (2018) and train the model for next-step prediction. We compare the proposed approach against SCOFF (Goyal et al., 2020). The results of our comparison are shown in Table 2, using the ARI and MSE metrics. ARI measures how well the different balls are segregated into different slots; higher ARI means better segregation. We can see that using a shared workspace results in higher ARI compared to the pairwise communication in SCOFF. Thus, using a shared workspace results in a better division of labor among specialists. We also compare the proposed method against other baselines in appendix section F.1.
| Model | Num. Slots | ARI ↑ | MSE ↓ |
| --- | --- | --- | --- |
| SCOFF | – | 0.276±0.001 | 0.083±0.0 |
| SCOFF + SW | 2 | 0.154±0.007 | 0.135±0.002 |
| SCOFF + SW | 4 | 0.487±0.085 | 0.059±0.0 |
| SCOFF + SW | 5 | 0.915±0.0 | 0.035±0.0 |
| SCOFF + SW | 8 | 0.891±0.001 | 0.039±0.0 |
| SCOFF + SW | 10 | 0.351±0.001 | 0.08±0.0 |
Table 2: Performance of SCOFF augmented with shared-workspace attention on the bouncing-balls task. We also analyse the effect of varying the number of slots in the shared workspace: increasing the number of slots beyond a point decreases performance, validating our claims regarding the bandwidth-limited communication channel provided by the shared workspace.

Shared Workspace for Atari Video Games. We start by training RIMs and RIMs + shared workspace (SW) on three "source" games (Pong, River Raid, and Seaquest) and test whether the learned features transfer to a different subset of randomly selected "target" games (Alien, Asterix, Boxing, Centipede, Gopher, Hero, James Bond, Krull, Robotank, Road Runner, Star Gunner, and Wizard of Wor). We use a sufficient number of specialists in RIMs (10). We train on the source games for 10M steps and then fine-tune on the transfer games for 10M more steps. We choose these games as they were also used in the original RIMs paper (Goyal et al., 2019). Using a suite of 36 game pairs, we find that RIMs + SW outperforms RIMs on both game A (a median performance ratio of 1.13; mean of 1.16) and game B (a median performance ratio of 1.11; mean of 1.15). The improved performance with RIMs + SW is due to better forward transfer (knowledge acquired for game A facilitates the learning of game B) and reduced backward interference (knowledge acquired for game B does not disrupt knowledge acquired for game A), presumably thanks to a more appropriate modularization of knowledge.

# 5 CONCLUSION

Inspired by global workspace theories from cognitive neuroscience, we have proposed a shared workspace model for establishing coherence among modular neural specialists while exchanging information in a systematic way.
We show that using a limited-capacity shared workspace as a bottleneck for mediating communication among specialists results in better performance across a wide range of visual reasoning benchmarks, compared to the pairwise interactions typically used in self-attention schemes. The proposed approach combines several key properties: knowledge and expertise are divided among specialists, the specialists compete to post new contents to the workspace, and, after being updated, the shared workspace is accessible to all specialists for their own updates.

# ETHICS STATEMENT

The authors do not foresee any negative social impacts of this work, but of course the accumulation of improvements in ML could be misused, as it may give more power to nefarious agents.

# REPRODUCIBILITY STATEMENT

We use Algorithms 1 and 2 for our experiments; we will release the code after the review process. We also provide our code in the supplementary material.

# REFERENCES

S. Ahmad and S. Omohundro. Equilateral triangles: A challenge for connectionist vision. 2009.
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 39-48, 2016.
Bernard J Baars. A cognitive theory of consciousness. Cambridge University Press, 1993.
Bernard J Baars. In the theatre of consciousness. Global workspace theory, a rigorous scientific theory of consciousness. Journal of Consciousness Studies, 4(4):292-309, 1997.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680, 2019.
Léon Bottou and Patrick Gallinari.
A framework for the cooperation of learning algorithms. In Advances in neural information processing systems, pages 781-788, 1991. +Valentino Braitenberg. Vehicles: Experiments in synthetic psychology. MIT press, 1986. +Rodney A Brooks. Intelligence without representation. Artificial intelligence, 47(1-3):139-159, 1991. +Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. +Mikhail S Burtsev and Grigory V Sapunov. Memory transformer. arXiv preprint arXiv:2006.11527, 2020. +Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019. +Michael D. Colagrosso and Michael C Mozer. Theories of access consciousness. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 289-296. MIT Press, 2005. URL http://papers.nips.cc/paper/2715-theories-of-access-consciousness.pdf. +Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019. +S. Dehaene, H. Lau, and S. Kouider. What is consciousness, and could machines have it? Science, 358(6362):486-492, 2017. +Stanislas Dehaene and Jean-Pierre Changeux. Experimental and theoretical approaches to conscious processing. Neuron, 70(2):200-227, 2011. + +Stanislas Dehaene, Michel Kerszberg, and Jean-Pierre Changeux. A neuronal model of a global workspace in effortful cognitive tasks. Proceedings of the national Academy of Sciences, 95(24): 14529-14534, 1998. +Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Universal transformers. arXiv preprint arXiv:1807.03819, 2018. 
+Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. +Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A Rusu, Alexander Pritzel, and Daan Wierstra. Pathnet: Evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734, 2017. +Jerry A Fodor. The modularity of mind. MIT press, 1983. +Rohit Girdhar and Deva Ramanan. CATER: A diagnostic dataset for compositional actions and temporal reasoning. CoRR, abs/1910.04744, 2019. URL http://arxiv.org/abs/1910.04744. +Anirudh Goyal and Yoshua Bengio. Inductive biases for deep learning of higher-level cognition. arXiv preprint arXiv:2011.15091, 2020. +Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, and Bernhard Scholkopf. Recurrent independent mechanisms. arXiv preprint arXiv:1909.10893, 2019. +Anirudh Goyal, Alex Lamb, Phanideep Gampa, Philippe Beaudoin, Sergey Levine, Charles Blundell, Yoshua Bengio, and Michael Mozer. Object files and schemata: Factorizing declarative and procedural knowledge in dynamical systems. arXiv preprint arXiv:2006.16225, 2020. +Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. CoRR, abs/1410.5401, 2014. URL http://arxiv.org/abs/1410.5401. +Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626): 471-476, 2016. +Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local experts. Neural computation, 3(1):79-87, 1991. 
Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, and Joao Carreira. Perceiver: General perception with iterative attention. arXiv preprint arXiv:2103.03206, 2021.
Andrej Karpathy. karpathy/minGPT, Aug 2020. URL https://github.com/karpathy/minGPT.
Nan Rosemary Ke, Anirudh Goyal, Olexa Bilaniuk, Jonathan Binas, Michael C Mozer, Chris Pal, and Yoshua Bengio. Sparse attentive backtracking: Temporal credit assignment through reminding. In Advances in neural information processing systems, pages 7640-7651, 2018.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Alex Lamb, Di He, Anirudh Goyal, Guolin Ke, Chien-Feng Liao, Mirco Ravanelli, and Yoshua Bengio. Transformers with competitive ensembles of independent mechanisms, 2021. URL https://openreview.net/forum?id=1TIrbngpW0x.
Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. In International Conference on Machine Learning, pages 3744-3753, 2019.

Kanika Madan, Nan Rosemary Ke, Anirudh Goyal, Bernhard Schölkopf, and Yoshua Bengio. Meta attention networks: Meta-learning attention to modulate information between recurrent independent mechanisms. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=Lc28QAB4ypz.
Marvin Minsky. Society of mind. Simon and Schuster, 1988.
Sarthak Mittal, Alex Lamb, Anirudh Goyal, Vikram Voleti, Murray Shanahan, Guillaume Lajoie, Michael Mozer, and Yoshua Bengio. Learning to combine top-down and bottom-up signals in recurrent neural networks with attention over modules. In International Conference on Machine Learning, pages 6972-6986. PMLR, 2020a.
Sarthak Mittal, Alex Lamb, Anirudh Goyal, Vikram Voleti, Murray Shanahan, Guillaume Lajoie, Michael Mozer, and Yoshua Bengio.
Learning to combine top-down and bottom-up signals in recurrent neural networks with attention over modules. In International Conference on Machine Learning, pages 6972-6986. PMLR, 2020b. +Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019. +Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In International Conference on Machine Learning, pages 4055-4064. PMLR, 2018. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. +Jack W Rae, Anna Potapenko, Siddhant M Jayakumar, and Timothy P Lillicrap. Compressive transformers for long-range sequence modelling. arXiv preprint arXiv:1911.05507, 2019. +Nasim Rahaman, Anirudh Goyal, Muhammad Waleed Gondal, Manuel Wuthrich, Stefan Bauer, Yash Sharma, Yoshua Bengio, and Bernhard Scholkopf. S2rms: Spatially structured recurrent modules. arXiv preprint arXiv:2007.06533, 2020. +Scott Reed and Nando De Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015. +Eric Ronco, Henrik Gollee, and Peter J Gawthrop. Modular neural networks and self-decomposition. Technical Report CSC-96012, 1997. +Clemens Rosenbaum, Tim Klinger, and Matthew Riemer. Routing networks: Adaptive selection of non-linear functions for multi-task learning. arXiv preprint arXiv:1711.01239, 2017. +Clemens Rosenbaum, Ignacio Cases, Matthew Riemer, and Tim Klinger. Routing networks and the challenges of modular and compositional computation. arXiv preprint arXiv:1904.12774, 2019. +Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. 
In Advances in neural information processing systems, pages 4967-4976, 2017. +Adam Santoro, Ryan Faulkner, David Raposo, Jack Rae, Mike Chrzanowski, Theophane Weber, Daan Wierstra, Oriol Vinyals, Razvan Pascanu, and Timothy Lillicrap. Relational recurrent neural networks. In Advances in Neural Information Processing Systems, pages 7299-7310, 2018. +Murray Shanahan. A cognitive architecture that combines internal simulation with a global workspace. Consciousness and cognition, 15(2):433-449, 2006. +Murray Shanahan. Embodiment and the inner life: Cognition and Consciousness in the Space of Possible Minds. Oxford University Press, USA, 2010. +Murray Shanahan. The brain's connective core and its role in animal cognition. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1603):2704-2714, 2012. + +Murray Shanahan and Bernard Baars. Applying global workspace theory to the frame problem. Cognition, 98(2):157-176, 2005. +Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017. +Sjoerd Van Steenkiste, Michael Chang, Klaus Greff, and Jürgen Schmidhuber. Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. arXiv preprint arXiv:1802.10353, 2018. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008, 2017. +Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michael Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature, 575(7782):350-354, 2019. + +# Part I + +# Appendix + +# A PSEUDO CODES + +Alg. 
1 shows the integration of the shared workspace with RIMs (Goyal et al., 2019). We replace the direct module-to-module interaction via attention in RIMs with a shared workspace. Specialists compete to write to the shared workspace, and the contents of the workspace are broadcast to all the specialists.

Alg. 2 shows the integration of the shared workspace with TIMs (Lamb et al., 2021). Again, we replace the direct module-to-module communication in TIMs with a shared workspace.

# Algorithm 1: Shared Workspace integration with RIMs

Input: Current sequence element $\mathbf{x}_t$ and previous states of the specialists $\{h_{t-1,k}\}$ for $k \in \{1, \dots, n_s\}$, and the memory structured as a matrix $M$ with row-wise compartmentalized memories, where $m_i$ refers to the state of slot $i$ (total number of slots is $n_m$).

Step 1: Process image by position $p$ with a fully convolutional net

- $\mathbf{c}_p = [\mathrm{CNN}(\mathbf{x}_t)]_p$
- $\mathbf{z}_t = \left[ \begin{array}{ll} \mathbf{c}_p & \mathbf{e}_p \end{array} \right]$ (concatenate encoding of position to CNN output)

Step 2: Specialists compete to be selected to update the workspace based on the current input

- $q_k = h_{t-1,k} W^q$
- $s_k = \mathrm{softmax}\left(\frac{q_k \kappa}{\sqrt{d_e}}\right)$, where $\kappa = (\mathbf{z}_t W^e)^{\mathrm{T}}$
- Construct a set $\mathcal{F}_t$ which contains the indices of the $n_{\mathrm{sel}}$ specialists that have the largest $s_k$

$$
\bar{h}_{t,k} = \begin{cases} g_k\left(s_k \mathbf{z}_t W^v, h_{t-1,k}\right) & k \in \mathcal{F}_t, \\ h_{t-1,k} & k \notin \mathcal{F}_t, \end{cases}
$$

- $\mathbf{a}_k = s_k \mathbf{z}_t W^v \;\; \forall k \in \mathcal{F}_t$ (scaled dot-product attention)

Step 3: Activated specialists write to the shared workspace

- $\widetilde{Q} = M\widetilde{W}^q$
- $R = [M; A]$ where $A$ is the matrix whose rows are the $\mathbf{a}_k \;\; \forall k \in \mathcal{F}_t$
- $M \gets \mathrm{softmax}\left(\frac{\widetilde{Q}(R\widetilde{W}^e)^{\mathrm{T}}}{\sqrt{d_e}}\right) R\widetilde{W}^v$

Step 4: Broadcast of information from the shared workspace

- $\widehat{q}_k = \bar{h}_{t,k}\widehat{W}^q \quad \forall k \in \{1,\dots,n_s\}$
- $s_{k,j} = \mathrm{softmax}\left(\frac{\widehat{q}_k \widehat{\kappa}_j}{\sqrt{d_e}}\right)$ where $\widehat{\kappa}_j = (m_j \widehat{W}^e)^{\mathrm{T}} \quad \forall k \in \{1,\dots,n_s\}$, $j \in \{1,\dots,n_m\}$
- $h_{t,k} = \bar{h}_{t,k} + \sum_j s_{k,j} \widehat{v}_j$ where $\widehat{v}_j = m_j \widehat{W}^v \;\; \forall k \in \{1, \dots, n_s\}$

# B HYPERPARAMETERS

Table 3 lists the different hyper-parameters.

# Parameters in RIMs+SW:

RIMs with a shared workspace has three sets of parameters:

- **Input attention:** parameters of the attention for the $k$-th specialist, $\theta_k = (W_k^q, W^e, W^v)$, corresponding to query, keys, and values respectively. Each specialist has its own query parameters but shares the same keys and values (which are functions of the input). In the table these correspond to inp keys, inp values, and inp heads.
- **Writing to the shared workspace:** parameters corresponding to writing to the memory. Here, we follow a similar mechanism as in RMC (Santoro et al., 2018), where the shared

# Algorithm 2: Shared Workspace integration with TIMs

Notation: Consider $h_l$ as the output of the $l^{th}$ transformer layer.
Let the sequence length of the original input be $T$ and the embedding dimension of the transformer be $D$. Let the transformer be composed of $n_b$ mechanisms, and let the memory be a matrix $M$ with row-wise compartmentalized memories, where $m_i$ refers to the state of slot $i$ (total number of slots is $n_m$). Consider $h_l^k = h_l[:, (k-1)D/n_b : kD/n_b]$ to be the hidden state of the mechanism indexed $k$ at layer $l$.

Initialization: Convert the raw input $X \in \mathbb{R}^{T \times \text{vocab\_size}}$ to $h_0 = \text{position\_encoding} + \text{Embedding}(X)$, where $h_0 \in \mathbb{R}^{T \times D}$. Initialize the memory matrix $M$, which remains common for all layers in the transformer.

Input to layer $l$: $h_{l-1}$ of shape $\mathbb{R}^{T \times D}$

Step 1: Mechanisms compete to be selected to update the workspace based on the input they receive from the previous layer

- $W_k^c \in \mathbb{R}^{D/n_b \times 1}$
- $c_k = h_{l-1}^k W_k^c \quad \forall k \in \{1, \dots, n_b\}$
- $c = \text{softmax}(\text{concat}(c_1, \dots, c_{n_b}))$, $c \in \mathbb{R}^{T \times n_b}$
- For each time step $t$ in the original sequence of length $T$, we use the soft score $c$ to select the top $n_{\mathrm{sel}}$ mechanisms which self-attend and write to the memory, generating a set $\mathcal{F}_t$ which stores the indices of the $n_{\mathrm{sel}}$ mechanisms for position $t \in \{1, 2, \dots, T\}$. Also construct $c_k^* \in \mathbb{R}^{T \times D/n_b}$ where

$$
c_k^*[t,:] = \begin{cases} c[t][k] & k \in \mathcal{F}_t, \\ 0 & k \notin \mathcal{F}_t, \end{cases}
$$

Step 2: Selected mechanisms self-attend and update their hidden state

- $\mathrm{residual}_k = h_{l-1}^k$
- $\bar{h}_l^k = c_k^* \odot \text{SelfAttention}(h_{l-1}^k) + \mathrm{residual}_k \quad \forall k \in \{1, \dots, n_b\}$

Step 3: Selected mechanisms write to the shared workspace

- The memory matrix $M$ was last modified by the mechanisms of layer $l-1$
- Let $a_k = c_k^* \odot \bar{h}_l^k$ and $a = \text{concat}(a_1, \dots, a_{n_b})$. Absorb the first dimension (corresponding to position in the sequence) into the batch dimension by reshaping $a$. Perform the same steps as in Algorithm 1:
- $\widetilde{Q} = M\widetilde{W}^q$
- $R = [M; A]$ where $A = aW^v$
- $M \gets \text{softmax}\left(\frac{\widetilde{Q}(R\widetilde{W}^e)^{\mathrm{T}}}{\sqrt{d_e}}\right) R\widetilde{W}^v$

Step 4: Broadcast of information from the shared workspace

- Reshape the new memory to bring back the sequence dimension. Perform the same steps as in Algorithm 1:
- $\widehat{q}_k = \bar{h}_l^k \widehat{W}^q \quad \forall k \in \{1,\dots,n_b\}$
- $s_{k,j} = \mathrm{softmax}\left(\frac{\widehat{q}_k \widehat{\kappa}_j}{\sqrt{d_e}}\right)$ where $\widehat{\kappa}_j = (m_j \widehat{W}^e)^{\mathrm{T}} \quad \forall k \in \{1,\dots,n_b\}$, $j \in \{1,\ldots,n_m\}$
- $h_l^k = \bar{h}_l^k + \sum_j s_{k,j} \widehat{v}_j$ where $\widehat{v}_j = m_j \widehat{W}^v \quad \forall k \in \{1, \dots, n_b\}$.
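Both algorithms follow the same compete-write-broadcast cycle: a few modules are selected, the workspace slots attend over the winners' proposals, and every module then reads the updated slots back residually. The sketch below is a minimal NumPy illustration under our own simplifying assumptions, not the paper's implementation: it omits the gated memory update of Appendix C, scores the competition against the current memory rather than an input encoding (purely to keep the sketch self-contained), and uses hypothetical names such as `workspace_step`.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def workspace_step(H, M, params, n_sel):
    """One write/broadcast cycle over a shared workspace.

    H: (n_s, d) specialist states; M: (n_m, d) workspace slots.
    Returns the updated (H_new, M_new)."""
    d = H.shape[1]
    Wq, We, Wv = params["write"]   # projections for the write step
    Uq, Ue, Uv = params["bcast"]   # projections for the broadcast step

    # Competition: keep only the top-n_sel specialists (toy salience score).
    scores = (H @ Wq) @ (M @ We).T / np.sqrt(d)
    salience = softmax(scores, axis=-1).max(axis=-1)
    winners = np.argsort(-salience)[:n_sel]

    # Write: slots attend over [old slots; winning proposals], as in Step 3.
    R = np.concatenate([M, H[winners]], axis=0)
    attn = softmax((M @ Wq) @ (R @ We).T / np.sqrt(d), axis=-1)
    M_new = attn @ (R @ Wv)

    # Broadcast: every specialist queries the updated memory and folds the
    # read-out in residually, as in Step 4.
    read = softmax((H @ Uq) @ (M_new @ Ue).T / np.sqrt(d), axis=-1)
    H_new = H + read @ (M_new @ Uv)
    return H_new, M_new

rng = np.random.default_rng(0)
n_s, n_m, d = 6, 4, 16
H = rng.standard_normal((n_s, d))
M = rng.standard_normal((n_m, d))
params = {"write": [rng.standard_normal((d, d)) * 0.1 for _ in range(3)],
          "bcast": [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]}
H_new, M_new = workspace_step(H, M, params, n_sel=3)
```

Note how the workspace bottleneck forces all inter-module traffic through the $n_m$ slots: specialists never attend to each other directly, only to the memory.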
+ +![](images/3473a91279f8c64de03771aee5d11ec1f0a7e0942e61c0e38ab715dac1a4bb93.jpg) +Equilateral Triangles + +![](images/c8383b1a5240773c1332ba41aa2ccee390aa17210e664c5cfaef407d91fa9b91.jpg) + +![](images/59a7b1211b578faae996f0603f1acaa7b9e181d16f1153f3cf1e4c2780e13aa7.jpg) + +![](images/3f16ec1436a7f0da41d5829835e382718f8a7c185c7fb63bc246135d1baaa713.jpg) +Non Equilateral Triangles + +![](images/54052356908bb1e0d8e1cd55bf008c3e151bd9601977871a8fa402c5e2abe2eb.jpg) +Figure 6: A demonstration of the detecting equilateral triangles task. + +![](images/bf56ebe2db39b2e835f0d5dd2fc797bf0925ebdd28c1bb0d6593cf969f3df1a2.jpg) + +
| Parameter | Value |
| --- | --- |
| Number of specialists ($n_s$) | 6 |
| Size of each specialist | 85 |
| Number of memory slots ($n_m$) | |
| Optimizer | Adam (Kingma and Ba, 2014) |
| Learning rate | $1 \cdot 10^{-4}$ |
| Batch size | 64 |
| Inp keys | 64 |
| Inp values | 85 |
| Inp heads | 4 |
| Inp dropout | 0.1 |
| Number of memory slots | 4 |
| Number of memory heads | 1 |
| Size of attention head | 32 |
| Key size | 32 |
| Number of MLP layers in attention | 3 |
| Gate style | 'unit' |
| Memory attention heads | 4 |
| Memory attention keys | 32 |
| Memory attention values | 32 |
Table 3: Generic hyperparameters for the proposed model (for RIMs)

workspace is seen as a matrix with row-wise compartmentalized memories (i.e., slots), i.e., $\widetilde{\mathbf{W}}^q, \widetilde{\mathbf{W}}^e, \widetilde{\mathbf{W}}^v$. In the table these correspond to the number of memory slots, number of memory heads, size of attention head, key size, and number of MLP layers in attention. These are the same hyper-parameters as in RMC (Santoro et al., 2018). We tried two different sets of hyper-parameters: (a) a single slot, and (b) 4 slots.

- **Broadcast of information from the shared workspace:** In this process, the information in the workspace is broadcast to all the specialists: each specialist produces a query, and the keys and values are a function of the memory state. Each specialist gets information from the memory according to its query, and this information is used to update the state of each specialist in a residual fashion. This corresponds to the parameters $\widehat{\mathbf{W}}^v$, $\widehat{\mathbf{W}}^q$, $\widehat{\mathbf{W}}^e$, i.e., memory attention heads, memory attention keys, and memory attention values in the table. We did not do any hyper-parameter search for these hyper-parameters.

# Resources Used:

- For vision tasks like Sort-of-CLEVR, equilateral triangles, and CIFAR classification, it takes about 6 hours to run 200 epochs on a V100 (32G) GPU.
- It takes about 2 days to train the proposed model on the bouncing-ball task for 100 epochs on a V100 (32G) GPU. We did not do any hyper-parameter search specific to a particular dataset (i.e., 4Balls, 678Balls, or Curtain). We ran the proposed model with different numbers of memory slots (i.e., 2/4/8) for all the datasets.
- For the StarCraft task, it takes about 5 days to train on a V100 (16G) GPU with a batch size of 4.

# C IMPLEMENTATION DETAILS

**Writing information in the shared workspace.**
While writing information to the shared workspace, we update the workspace using a gating mechanism as proposed in Santoro et al. (2018). The gating mechanism consists of input and forget gates. Let $M^{t-1}$ and $M^t$ be the previous and updated memory matrices respectively, and let $M$ be the result of the attention mechanism described in step 2 of section 2.1. Let $X_{1 \dots n_s}$ be the inputs to the $n_s$ specialists. The gating mechanism can be formulated as follows:

$$
\bar{\boldsymbol{X}} = \frac{1}{n_s} \sum_{i=1}^{n_s} \operatorname{relu}\left(\boldsymbol{X}_i \boldsymbol{W}^1\right)
$$

$$
\boldsymbol{K} = \bar{\boldsymbol{X}} + \tanh\left(\boldsymbol{M}^{t-1}\right)
$$

$$
\boldsymbol{I} = \operatorname{sigmoid}\left(\boldsymbol{K}\boldsymbol{W}^I\right)
$$

$$
\boldsymbol{F} = \operatorname{sigmoid}\left(\boldsymbol{K}\boldsymbol{W}^F\right)
$$

$$
\boldsymbol{M}^t = \boldsymbol{I} \odot \tanh(\boldsymbol{M}) + \boldsymbol{F} \odot \boldsymbol{M}^{t-1}
$$

Here, $\boldsymbol{I}$ and $\boldsymbol{F}$ are the input and forget gates respectively, and $\odot$ denotes element-wise multiplication. Note that $\boldsymbol{W}^1$ is shared across all $n_s$ specialists.

# D PROPERTIES OF SHARED WORKSPACE

In section 2, we claim that higher-order interaction terms and effects due to persistence of memory are key contributors to shared workspace performance. We support those claims here.

**Shared Workspace vs. repeated self-attention.** Higher-order interaction can be simulated by repeating the self-attention step multiple times at the same layer/time-step. However, due to the absence of a global communication channel, there is no constraint that the messages passed among the neural modules lie in the same representation space. We modify a standard transformer so that the self-attention step is repeated twice in every layer. We expect that $2\times$ Self Attention will perform worse than SW.
We also run a model where both self-attention and the shared workspace are used by the transformer to update its state.

**Persistence of memory.** To check whether persistence is crucial for our model to perform well, we run a model where we re-initialize the shared workspace at every layer. Again, we expect that removing memory persistence should result in a drop in performance and speed of convergence.

We run these models on the sort-of-clevr dataset and present the results in Figure 7.

We note that removing persistence of memory results in significantly slower convergence. Replacing SW with $2 \times \mathrm{SA}$ results in a significant drop in performance.

Figure 7: Comparison on Sort-of-CLEVR relational reasoning. Speed of convergence for relational and non-relational questions in the sort-of-clevr dataset. We can see that the Shared Workspace model converges faster and generalizes better compared to all the other models. Here SW refers to the shared workspace, $2 \times \mathrm{SA}$ refers to applying self-attention twice in the same layer, and $\mathrm{SW} + \mathrm{SA}$ refers to using both the Shared Workspace and Self Attention in each transformer layer.
![](images/04e7e262bebaf4a25e5a033e09381a70ea77efa2aed54063ea57cc36c389a64f.jpg)

![](images/a51dbb229f777c051abfa64dc1db7517ee8768da0657a88760d8447c230fdcc1.jpg)

![](images/80fd2c5c382eb4bca3a1bbc0a23030d78107898746d322277e39c516a940b913.jpg)
Figure 8: A sample from the sort-of-clevr dataset.

# Relational questions:

1. What is the shape of the object closest to the red object? $\Rightarrow$ square
2. What is the shape of the object furthest from the orange object? $\Rightarrow$ circle
3. How many objects have the same shape as the blue object? $\Rightarrow 3$

# Non-relational questions:

1. What is the shape of the red object? $\Rightarrow$ circle
2. Is the green object placed on the left side of the image? $\Rightarrow$ yes
3.
Is the orange object placed in the upper part of the image? $\Rightarrow$ no

# E TRANSFORMER TASKS

# E.1 DETECTING EQUILATERAL TRIANGLES

A demonstration of this task can be found in Figure 6. We use images of size $64 \times 64$ for this task. Our training dataset consists of 50000 examples and we evaluate on 10000 examples. We follow the same setup as vision transformers (Dosovitskiy et al., 2020) for this task. We divide the image into patches of size $4 \times 4$; this sequence of patches is fed as input to a 4-layer transformer along with a CLS token which is used for classification. We set the hidden dim to 256 and the ffn dim to 512. For the proposed models (TR+SSW, TR+HSW), we use a query and key size of 32 and a value size of 64. We use 4 heads for reading from and writing into the shared workspace, which consists of 8 memory slots. For the baseline models (TR, TR + HC, STR), we use a query, key, and value size of 64 and 4 heads. For training, we use a batch size of 64. We train the model for 200 epochs using the Adam optimizer with a learning rate of 0.0001. We anneal the learning rate using cosine annealing.

# E.2 SORT-OF-CLEVR

Figure 8 shows a sample from this dataset. The images in this dataset are of size $75 \times 75$. Each question is encoded into 11 bits. The first 6 bits indicate color, the next 2 bits indicate question type (relational or non-relational), and the remaining 3 bits indicate question subtype (according to figure 8). We use a 4-layer transformer for this task with the hidden dim set to 256 and the ffn dim set to 512. For the proposed models (TR+SSW, TR+HSW), we use a query and key size of 32 and a value size of 64. We use 4 heads for reading from and writing into the shared workspace, which consists of 8 memory slots. For the baseline models (TR, TR + HC, STR), we use a query, key, and value size of 64 and 4 heads.
We encode the 11-bit question into a 256-dimensional vector representation and concatenate it with the sequence of $15 \times 15$ patches obtained from the image.

We use the representation corresponding to the CLS token for classification. We train the model using a cross-entropy loss. We use a batch size of 64 and train the model for 100 epochs. We use the Adam optimizer with a learning rate of 0.0001 for training.

# E.3 CATER: OBJECT TRACKING

Each CATER video consists of about 300 frames of size $224 \times 224$. We first sample frames at a sampling rate of 6, which results in 50 frames. From these 50 frames, we stack 5 consecutive frames together and pass each stack through an 18-layer ResNet; the resulting sequence of 10 stack representations is passed as input to the transformer. The task is set up as a classification problem where we have to predict which cell in the $6 \times 6$ grid contains the snitch in the final frame. We use a 6-layer transformer with the hidden dim set to 512 and the ffn dim set to 2048. For the proposed models (TR+SSW, TR+HSW), we use a query and key size of 32 and a value size of 64. We use 8 heads for reading from and writing into the shared workspace, which consists of 8 memory slots. For the baseline models (TR, TR + HC, STR), we use a query, key, and value size of 64 and 8 heads.

# F RIMS TASKS

# F.1 BOUNCING BALL

The dataset consists of 50,000 training examples and 10,000 test examples showing $\sim 50$ frames of either 4 solid balls bouncing in a confined square geometry (4Balls), 6-8 balls bouncing in a confined geometry (678Balls), 3 balls bouncing in a confined geometry with an occluded region (Curtain), or balls of different colors (Colored 4Balls and Colored 678Balls). We train the baselines as well as the proposed shared workspace extension (e.g., RIMs + SW). As shown in Fig. 9, we study the performance of the proposed model compared with LSTM, RIMs, and RMC.
The first 10 frames of ground truth are fed in, and then the system is rolled out for the next 35 time steps. During the rollout phase, the proposed method performs better than the baselines in accurately predicting the dynamics of the balls, as reflected by the cross entropy (CE).

We trained the baselines as well as the proposed model for about 100 epochs. We use the same encoder and decoder architectures as in Van Steenkiste et al. (2018). Hyper-parameters specific to the proposed architecture are listed in Tab. 3.

# G INTEGRATING SW WITH MORE ARCHITECTURES

![](images/4e7ad46dfaa34f77d2b31f141e49c07f1af41cac565a35ca46b413efedf41fd2.jpg)
Figure 9: Bouncing ball motion: prediction error comparison of the proposed method against the LSTM, RIMs, and RMC baselines. Given 10 frames of ground truth, the model predicts the rollout over the next 35 steps. Here, we present the BCE for the 30th frame and the 45th frame. The proposed SW extension performs better than the other baselines in accurately predicting the dynamics, with an increasing advantage as the number of unrolled steps (30 vs. 45) and balls ((a) vs. (b)) increases. Results are an average over 5 random seeds.

# G.1 TIMs

TIMs was proposed by Lamb et al. (2021). A transformer network is divided into 'independent mechanisms' which update their state by sharing information between positions and between mechanisms. The information-sharing step between mechanisms can be replaced by SW to create TIMs+SW.

# G.1.1 MULTIMNIST GENERATION

In this task, we train an Image Transformer (Parmar et al., 2018) (a pixel-by-pixel, raster-order generative model) on the next-pixel prediction task on the "MultiMNIST" dataset.
| Parameter | Value |
| --- | --- |
| **Common Parameters** | |
| Optimizer | Adam (Kingma and Ba, 2014) |
| Learning rate | $1 \cdot 10^{-3}$ |
| Batch size | 12 |
| Number of attention heads | 8 |
| **TR** | |
| Size of transformer layer | 256 |
| **TIMs** | |
| Number of mechanisms | 4 |
| Size of mechanism | 48 |
| **TIMs+SW** | |
| Number of mechanisms | 4 |
| Size of mechanism | 40 |
| Number of memory slots | 2 |
| Size of memory slots | 160 |
| Memory attention heads | 8 |
| Gate style | 'unit' |
| Number of MLP layers in attention | 2 |
Table 4: Hyperparameters for the MultiMNIST task

Each $32 \times 32$ image in this dataset is made up of four randomly selected (and augmented) MNIST digits (resized to $32 \times 8$) placed side-by-side, as shown in figure 10. The digits themselves are selected independently of one another.

The main aim of creating such a task is to observe the working of independent mechanisms in architectures such as TIMs (Lamb et al., 2021). Each image in the MultiMNIST dataset can be broken down into different sets of independent spatial components. Since the digits which make up the image are independently selected, the joint distribution of pixel intensities in any one of the four sections of the image is statistically independent of the pixel intensities in any other section. Moreover, each section of the image can be further broken down into independent spatial components: one that pertains to the background and one that pertains to the foreground.

It is expected that a monolithic architecture (having a single computational unit) would have to devote a significant portion of its training to learning the statistical independence between the different constituents of the image. On the other hand, architectures made up of sparsely interacting independent mechanisms have a natural way of capturing such statistical independence. A division of labour where each mechanism focuses on the generation of a distinct independent constituent of the image should allow for better generalization on the test set. Once the generation of a constituent is completed, the task can be handed over to some other mechanism based on the current position in the image.
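The composition just described is easy to reproduce. The sketch below builds one MultiMNIST-style image from four independently drawn strips; it is our own toy illustration, not code from the paper (random arrays stand in for resized $32 \times 8$ MNIST digits, and `make_multimnist` is a hypothetical helper name).

```python
import numpy as np

def make_multimnist(digit_bank, rng):
    """Compose a 32x32 image from four independently sampled 32x8 strips.

    digit_bank: array of shape (n, 32, 8) -- stand-ins for MNIST digits
    already resized to 32x8. Because each of the four sections is drawn
    independently, pixel statistics across sections are independent.
    """
    idx = rng.integers(0, len(digit_bank), size=4)  # four independent picks
    return np.concatenate([digit_bank[i] for i in idx], axis=1)  # (32, 32)

rng = np.random.default_rng(0)
bank = rng.random((100, 32, 8))  # toy "digit" strips with values in [0, 1)
img = make_multimnist(bank, rng)
```

The statistical independence between the four columns is exactly what a modular model can exploit: a mechanism generating one strip gains nothing from attending to the others.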
For this experiment we train a standard transformer with shared parameters across all layers (denoted TR), TIMs (Lamb et al., 2021) with 4 mechanisms, and a modified version of TIMs with 4 mechanisms in which the pair-wise communication between mechanisms is replaced by communication via a shared workspace (denoted TIMs+SW).

Training. We follow the minGPT Image Transformer setup (Karpathy, 2020) for our experiments. All three configurations have 8 layers and 8 heads for multi-headed attention, and use the exact same parameter initialization and base architecture. We train all three models for 20 epochs.

In the TR model, all 8 monolithic layers share the same set of parameters. In TIMs and TIMs+SW, the first two layers are standard monolithic layers with shared parameters. The middle four layers in both architectures are modular layers with four mechanisms; these four layers share the same set of parameters. In TIMs+SW, the four mechanisms in these layers communicate via a shared workspace (with 2 memory slots). This shared workspace is common to all four middle layers and is absent in TIMs, where the mechanisms communicate via pair-wise competition as proposed in the original paper. Both TIMs and TIMs+SW are concluded by two more monolithic layers which again share the same parameters.

| Model | Loss |
| --- | --- |
| TR | 0.000058 |
| TIMs (4 mechanisms) | 0.000050 |
| TIMs+SW (4 mechanisms) | 0.000042 |

Table 5: MultiMNIST Generation Task: we report the cross-entropy loss between the generated pixel values and the true pixel values on the test set (smaller numbers are better).

![](images/4a80d393cbb8caa5ffd4f818673ce1490f9f2b1699e8dc6c65ac524964a0bb80.jpg)
Figure 10: A randomly selected batch of 16 images from the MultiMNIST generation dataset (4 rows and 4 columns).

For all three models to have a comparable number of parameters, we chose the transformer embedding dimension to be 256 for the TR model, 192 for the TIMs model, and 160 for the TIMs+SW model. In TIMs and TIMs+SW, the embedding dimension is divided equally among the four specialists. Each memory slot in the shared workspace of the TIMs+SW model has a 160-dimensional embedding, and the model uses four heads to perform read and write operations on the shared workspace. The total number of parameters for all three architectures lies between 1M and 1.8M.

Results. We report the best cross-entropy loss within 20 epochs on the test set of the MultiMNIST dataset for the next-pixel prediction task in Table 5. We further plot the sixth-layer "mechanism activation score" of TIMs and TIMs+SW while generating the first four images of the test set in the best epoch (shown in figure 5).

# G.1.2 USING WORKSPACE FOR LANGUAGE MODELLING

We train our models on the WikiText-103 dataset, posed as a language modeling problem. The dataset is divided into train, test and validation sets composed of 28,475, 60 and 60 articles respectively.
The total number of tokens in the train set is more than 103 million, hence the name of the dataset. This dataset retains numbers, punctuation, and case.

Training. We train our models for 15 epochs on the next-word prediction task on the WikiText-103 dataset and report the perplexity on the validation set. We show results using TIMs (Lamb et al., 2021) with 4 mechanisms and TIMs+SW with 4 mechanisms (where we replace the pairwise communication in TIMs with communication via a shared workspace, as in the MultiMNIST experiment). We modify the FAIRSEQ (Ott et al., 2019) transformer language model class for all of our experiments.

For TIMs+SW, we train and test two different variants: TIMs+SSW uses soft attention to generate the activation scores of the competing independent mechanisms, whereas TIMs+HSW uses top-k attention with $k = 2$.

Since our aim in this test is to compare the performance of the two models on the language modeling task, the architectures consist of a transformer decoder only. In both models, there are 8 transformer decoder layers divided into 3 sets. The first 2 layers are standard monolithic decoder layers which share the same parameters. The next 4 layers are modular layers (TIMs layers or TIMs+SW layers depending on the model choice); these layers also share the same parameters among themselves. The last 2 layers are again standard monolithic decoder layers, both sharing the same parameters.

The inputs to the network are 1024-dimensional word embeddings; the transformer layer dimension is 1024 and the feed-forward dimension is 2048.

Both networks have 8 attention heads with a head dimension of 128. The total transformer layer size of $8 \times 128 = 1024$ is divided equally among the four mechanisms. In TIMs, the mechanisms in the four modular layers interact via pair-wise communication, whereas in TIMs+SSW and TIMs+HSW they interact via a shared workspace.
The shared workspace has 2 memory slots, each 1024-dimensional, with 4 attention heads for reading and writing.
| Parameter | Value |
| --- | --- |
| **Common Parameters** | |
| Optimizer | Adam (Kingma and Ba, 2014) |
| Learning rate | $5 \cdot 10^{-4}$ |
| Adam betas | 0.99, 0.98 |
| Weight decay | 0.01 |
| LR scheduler | 'inverse square root' |
| Max tokens per GPU | 3078 |
| Batch size multiple | 8 |
| Number of attention heads | 8 |
| Transformer layer size | 1024 |
| Number of mechanisms | 4 |
| Update frequency | 4 |
| Number of warmup updates | 4000 |
| Starting warmup LR | $1 \cdot 10^{-7}$ |
| **TIMs+SSW** | |
| Number of memory slots | 2 |
| Size of memory slots | 1024 |
| Memory attention heads | 4 |
| Gate style | 'unit' |
| Number of MLP layers in attention | 3 |
| Top-k competition | False |
| **TIMs+HSW** | |
| Number of memory slots | 2 |
| Size of memory slots | 1024 |
| Memory attention heads | 4 |
| Gate style | 'unit' |
| Number of MLP layers in attention | 3 |
| Top-k competition | True, k=2 |
Table 6: Hyperparameters for the WikiText-103 language modeling task

Results. We plot the per-epoch perplexity on the validation set. All models have a comparable number of parameters (within a $10\%$ difference). We note that TIMs performs poorly on this dataset, but adding a shared workspace consistently improves performance. We also note that sparsity indeed helps, as TIMs+HSW performs best.

![](images/84400554d1d852b07efe3f6044998a73f74fc7b0cef5c702b7f9b6dfeece8a3a.jpg)
Figure 11: Per-epoch validation perplexity of TIMs, TIMs+SSW, and TIMs+HSW on the WikiText-103 language modeling task.
\ No newline at end of file
diff --git a/coordinationamongneuralmodulesthroughasharedglobalworkspace/images.zip b/coordinationamongneuralmodulesthroughasharedglobalworkspace/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e7e2b014e75a3d9bb9eed5c822a631d96f6c4769
--- /dev/null
+++ b/coordinationamongneuralmodulesthroughasharedglobalworkspace/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:834be93aca5a1e0212c319da439d11fe3a80018215b7b62e3cf33ac7bde8a978
+size 491815
diff --git a/coordinationamongneuralmodulesthroughasharedglobalworkspace/layout.json b/coordinationamongneuralmodulesthroughasharedglobalworkspace/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..76b5e392028681b7b693d8ea7627f80b0fa42ed6
--- /dev/null
+++ b/coordinationamongneuralmodulesthroughasharedglobalworkspace/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f88daa22cb99fb0853d4c55b37bd402c079ca1259efc78458640030adb7732e
+size 724371
diff --git a/cyclemlpamlplikearchitecturefordenseprediction/cf1ce066-1001-4612-99b8-b8e24a4ebb57_content_list.json b/cyclemlpamlplikearchitecturefordenseprediction/cf1ce066-1001-4612-99b8-b8e24a4ebb57_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d77fdadfbf215536fa7c2c18ec4afb752f36fa02
--- /dev/null
+++
b/cyclemlpamlplikearchitecturefordenseprediction/cf1ce066-1001-4612-99b8-b8e24a4ebb57_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a42ae0468b0dc513b30534beab7bdf832d16ac8ce33afa0dacaef9ea164a9595 +size 128643 diff --git a/cyclemlpamlplikearchitecturefordenseprediction/cf1ce066-1001-4612-99b8-b8e24a4ebb57_model.json b/cyclemlpamlplikearchitecturefordenseprediction/cf1ce066-1001-4612-99b8-b8e24a4ebb57_model.json new file mode 100644 index 0000000000000000000000000000000000000000..156616f435406d5732cbd631f11a2146e25e445b --- /dev/null +++ b/cyclemlpamlplikearchitecturefordenseprediction/cf1ce066-1001-4612-99b8-b8e24a4ebb57_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d875b8f15ac15a6478666f13466cacd9e01eb7a7fa83df2100d8c5ce399bbd2d +size 154787 diff --git a/cyclemlpamlplikearchitecturefordenseprediction/cf1ce066-1001-4612-99b8-b8e24a4ebb57_origin.pdf b/cyclemlpamlplikearchitecturefordenseprediction/cf1ce066-1001-4612-99b8-b8e24a4ebb57_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..211ee348f83d60dac01224852ea7056bd9583aa3 --- /dev/null +++ b/cyclemlpamlplikearchitecturefordenseprediction/cf1ce066-1001-4612-99b8-b8e24a4ebb57_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7b89ac45df09be5ffccfcd534d0b8bd96a02b97ca7cca3a4d0d9ac7c057d9259 +size 1710975 diff --git a/cyclemlpamlplikearchitecturefordenseprediction/full.md b/cyclemlpamlplikearchitecturefordenseprediction/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1394733601dffc15f12e79cbc233632b8973ec89 --- /dev/null +++ b/cyclemlpamlplikearchitecturefordenseprediction/full.md @@ -0,0 +1,432 @@ +# CYCLEMLP: A MLP-LIKE ARCHITECTURE FOR DENSE PREDICTION + +Shoufa Chen $^{1}$ Enze Xie $^{1}$ Chongjian Ge $^{1}$ Runjian Chen $^{1}$ Ding Liang $^{2}$ Ping Luo $^{1,3}$ + +1 The University of Hong Kong 2 SenseTime Research + +$^{3}$ Shanghai AI 
Laboratory, Shanghai, China
{shoufach, xieenze, rhettgee, rjchen}@connect.hku.hk
liangding@sensetime.com pluo@cs.hku.hk

# ABSTRACT

This paper presents a simple MLP-like architecture, CycleMLP, which is a versatile backbone for visual recognition and dense prediction. Compared to modern MLP architectures, e.g., MLP-Mixer (Tolstikhin et al., 2021), ResMLP (Touvron et al., 2021a), and gMLP (Liu et al., 2021a), whose architectures are correlated to image size and are therefore infeasible for object detection and segmentation, CycleMLP has two advantages. (1) It can cope with various image sizes. (2) It achieves linear computational complexity with respect to image size by using local windows. In contrast, previous MLPs have $O(N^2)$ computations due to fully spatial connections. We build a family of models which surpass existing MLPs and even state-of-the-art Transformer-based models, e.g., Swin Transformer (Liu et al., 2021b), while using fewer parameters and FLOPs. We expand the MLP-like models' applicability, making them a versatile backbone for dense prediction tasks. CycleMLP achieves competitive results on object detection, instance segmentation, and semantic segmentation. In particular, CycleMLP-Tiny outperforms Swin-Tiny by $1.3\%$ mIoU on the ADE20K dataset with fewer FLOPs. Moreover, CycleMLP also shows excellent zero-shot robustness on the ImageNet-C dataset. Code is available at https://github.com/ShoufaChen/CycleMLP.

# 1 INTRODUCTION

Computer vision has long been dominated by convolutional neural networks (CNNs) (Krizhevsky et al., 2012; He et al., 2016). Recently, inspired by successes in the Natural Language Processing (NLP) field, Transformers (Vaswani et al., 2017) have been adopted by the computer vision community. Built with self-attention layers, multi-layer perceptrons (MLPs), and skip connections, Transformers have made numerous breakthroughs on visual tasks (Dosovitskiy et al., 2020; Liu et al., 2021b).
More recently, (Tolstikhin et al., 2021; Liu et al., 2021a) have validated that models built solely on MLPs and skip connections, without self-attention layers, can achieve surprisingly promising results on ImageNet (Deng et al., 2009) classification.

Despite promising results on visual recognition tasks, these MLP-like models cannot be used in dense prediction tasks (e.g., object detection and semantic segmentation) due to three challenges: (1) Current models are composed of blocks with non-hierarchical architectures, which makes it infeasible for them to provide pyramid, high-resolution feature

representations. (2) Current models cannot deal with flexible input scales due to the Spatial FC, as shown in Figure 1b. The Spatial FC is configured by an image-size-related weight. Thus, this
| FC | Stepsize | Complexity | Scale Variable | ImgNet Top-1 | COCO AP | ADE20K mIoU |
| --- | --- | --- | --- | --- | --- | --- |
| Channel | 1 | $O(HW)$ | ✓ | 79.4 | 35.0 | 36.3 |
| Spatial | - | $O(H^2W^2)$ | ✗ | 80.9 | ✗ | ✗ |
| Cycle | 7 | $O(HW)$ | ✓ | 81.6 | 41.7 | 42.4 |
+ +Table 1: Comparison of three types of FC operators. + +![](images/6a73e71a59fd5cdd0e70a24cb7db91d9c98201f8a100d89c4c7b2f7f7a73e838.jpg) + +![](images/d63f32ffa8cc9769186725f3ef25be53af31f78211289f7e7c955fff1220fcf9.jpg) + +![](images/05320cbfa355f0aa483ba6f13edc42e7da7305c16bd806f98af386ac9310c6bd.jpg) +Figure 1: (a)-(c): motivation of Cycle Fully-Connected Layer (Cycle FC) compared to Channel FC and Spatial FC. (a) Channel FC aggregates features in the channel dimension with spatial size '1'. It can handle various input scales but cannot learn spatial context. (b) Spatial FC (Tolstikhin et al., 2021; Touvron et al., 2021a; Liu et al., 2021a) has a global receptive field in the spatial dimension. However, its parameter size is fixed and it has quadratic computational complexity to image scale. (c) Our proposed Cycle Fully-Connected Layer (Cycle FC) has linear complexity the same as channel FC and a larger receptive field than Channel FC. (d)-(f): Three examples of different step sizes. Orange blocks denote the sampled positions. $\star$ denotes the output position. For simplicity, we omit batch dimension and set the feature's width to 1 here for example. Several more general cases can be found in Figure 7 (Appendix G). Best viewed in color. + +![](images/bf89444f7aeb3c691ed53bcdcb0fa8ce63c2a97c84301e8c0661610e1f6b6a95.jpg) + +![](images/b75dc712e224b2b1e0c23df4582bd7260d839c219b592f0a14ab878332ad8b8c.jpg) + +![](images/25c8a523af5fbc8cae8fa42857b01dd836c3995d236d30d74ad96cc1500f025a.jpg) + +structure typically requires the input image with a fixed scale during both the training and inference procedure. It contradicts the requirements of dense prediction tasks, which usually adopt a multi-scale training strategy (Carion et al., 2020) and different input resolutions in training and inference stages (Lin et al., 2014; Cordts et al., 2016). 
(3) The computational and memory costs of current MLP models are quadratic in the input image size for dense prediction tasks (e.g., the COCO benchmark (Lin et al., 2014)).

To address the first challenge, we construct a hierarchical architecture to generate pyramid features. For the second and third issues, we propose a novel variant of the fully connected layer, named the Cycle Fully-Connected Layer (Cycle FC), as illustrated in Figure 1c. Cycle FC is capable of dealing with various image scales and has linear computational complexity with respect to image size.

Cycle FC is inspired by the Channel FC layer illustrated in Figure 1a, which is designed for channel information communication (Lin et al., 2013; Szegedy et al., 2015; He et al., 2016; Howard et al., 2017). The main merit of Channel FC is that it can deal with flexible image sizes, since it is configured by image-size-agnostic weights of shape $C_{in} \times C_{out}$. However, Channel FC cannot aggregate spatial context information due to its limited receptive field.

Cycle FC is designed to enjoy Channel FC's merits of taking input with arbitrary resolution and linear computational complexity, while enlarging its receptive field for context aggregation. Specifically, Cycle FC samples points in a cyclical style along the channel dimension (Figure 1c). In this way, Cycle FC has the same complexity (both the number of parameters and FLOPs) as Channel FC while increasing the receptive field. We therefore adopt Cycle FC to replace the Spatial FC for spatial context aggregation (i.e., token mixing) and build a family of MLP-like models for both recognition and dense prediction tasks.

The contributions of this paper are as follows: (1) We propose a new MLP-like operator, Cycle FC, which is computationally friendly and copes with flexible input resolutions.
(2) We take the first attempt to build a family of hierarchical MLP-like architectures (CycleMLP) based on the Cycle FC operator for dense prediction tasks. (3) Extensive experiments on various tasks (e.g., ImageNet classification, COCO object detection and instance segmentation, and ADE20K semantic segmentation) demonstrate that CycleMLP outperforms existing MLP-like models and is comparable to, and sometimes better than, CNNs and Transformers on dense predictions.

![](images/445c498c727ff681c9b54ed7b7c3f1e9bac28e2e7f3d0ed6296dbd72754d3f50.jpg)
Figure 2: ImageNet accuracy v.s. model capacity. All models are trained on ImageNet-1K (Deng et al., 2009) without extra data. CycleMLP surpasses existing MLP-like models such as MLP-Mixer (Tolstikhin et al., 2021), ResMLP (Touvron et al., 2021a), gMLP (Liu et al., 2021a), $S^2$ -MLP (Yu et al., 2021) and ViP (Hou et al., 2021).

Related Work. Convolutional Neural Networks (CNNs) dominated visual backbones for several years (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2016). Dosovitskiy et al. (2020) introduced the first pure Transformer-based (Vaswani et al., 2017) model into computer vision and achieved promising performance, especially when pre-trained on the large-scale JFT dataset. Recently, some works (Tolstikhin et al., 2021; Touvron et al., 2021a; Liu et al., 2021a) removed the attention from Transformers and proposed pure MLP-based models. Please see Appendix A for a comprehensive review of the literature on visual backbones.
Finally, we present the detailed configurations of CycleMLP models in Sec. 2.3.

# 2.1 CYCLE FULLY-CONNECTED LAYER

Notation. We denote an input feature map as $\mathbf{X} \in \mathbb{R}^{H \times W \times C_{in}}$ , where $H, W$ denote the height and width of the image and $C_{in}$ is the number of feature channels. We use subscripts to index the feature map. For example, $\mathbf{X}_{i,j,c}$ is the value of the $c^{th}$ channel at the spatial position $(i,j)$ and $\mathbf{X}_{i,j,:}$ are the values of all channels at the spatial position $(i,j)$ .

The motivation behind Cycle FC is to enlarge the receptive field of MLP-like models to cope with downstream dense prediction tasks while maintaining computational efficiency. As illustrated in Figure 1a, Channel FC applies a weighting matrix to $\mathbf{X}$ along the channel dimension at a fixed position $(i,j)$ . In contrast, Cycle FC introduces a receptive field of $(S_H,S_W)$ , where $S_{H}$ and $S_{W}$ are step sizes along the height and width dimensions, respectively (illustrated in Figure 1 (d)). The basic Cycle FC operator can be formulated as:

$$
\operatorname {C y c l e F C} (\boldsymbol {X}) _ {i, j,:} = \sum_ {c = 0} ^ {C _ {i n} - 1} \boldsymbol {X} _ {i + \delta_ {i} (c), j + \delta_ {j} (c), c} \cdot \boldsymbol {W} _ {c,:} ^ {\mathrm {m l p}} + \boldsymbol {b} \tag {1}
$$

where $\pmb{W}^{\mathrm{mlp}}\in \mathbb{R}^{C_{in}\times C_{out}}$ and $\pmb {b}\in \mathbb{R}^{C_{out}}$ are the parameters of Cycle FC. $\delta_i(c)$ and $\delta_j(c)$ are the spatial offsets along the two axes for the $c^{th}$ channel, defined as:

$$
\delta_ {i} (c) = \left(c \bmod S _ {H}\right) - 1, \quad \delta_ {j} (c) = \left(\left\lfloor \frac {c}{S _ {H}} \right\rfloor \bmod S _ {W}\right) - 1 \tag {2}
$$

Examples. We provide several examples (Figure 1 (d)-(f)) to illustrate the step size. For ease of visualization, we set the tensor's $W = 1$ . Thus, these three examples naturally all have $S_W = 1$ .
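The offset schedule of equation 2 is easy to sanity-check in a few lines (a minimal sketch; the function name `cycle_offsets` is ours, not from the paper):

```python
def cycle_offsets(c, s_h, s_w):
    """Spatial offsets of equation 2 for channel index c with step sizes (S_H, S_W)."""
    delta_i = (c % s_h) - 1           # cycles along the height axis with period S_H
    delta_j = ((c // s_h) % s_w) - 1  # advances every S_H channels, cycles along width
    return delta_i, delta_j

# With S_H = 3, the height offsets cycle through -1, 0, 1 as channels advance:
print([cycle_offsets(c, 3, 3)[0] for c in range(9)])  # [-1, 0, 1, -1, 0, 1, -1, 0, 1]
```

Each channel thus reads from a slightly shifted spatial location, which is how a purely channel-wise weight acquires a spatial receptive field.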
Figure 1 (d) illustrates the offsets along the two axes when $S_H = 3$ , that is, $\delta_j(c) \equiv 0$ and $\delta_i(c) = \{-1,0,1,-1,0,1,\dots\}$ for $c = 0,1,2,\dots,8$ . Figure 1 (e) shows that when $S_H = H$ , Cycle FC has a global receptive field. Figure 1 (f) shows that when $S_H = 1$ , there is no offset

along either axis and Cycle FC degrades to Channel FC (Figure 1 (a)). We also provide a more general case where $W \neq 1$ and $S_{H} = 3$ , $S_{W} = 3$ in Figure 7 (Appendix).

The offsets $\delta_{i}(c)$ and $\delta_j(c)$ enlarge the receptive field of Cycle FC as compared to Channel FC (Figure 1a), which applies weights solely at the same spatial position for all channels. The larger receptive field in turn brings improvements on dense prediction tasks like semantic segmentation and object detection, as shown in Table 1. Meanwhile, Cycle FC still maintains computational efficiency and flexibility with respect to input resolution. Both the FLOPs and the number of parameters are linear in the spatial scale, exactly as for Channel FC. In contrast, although Spatial FC has a global receptive field over the whole spatial space, its computational cost is quadratic in the image scale. Besides, it fails to handle inputs with different resolutions.

# 2.2 COMPARISON BETWEEN MULTI-HEAD SELF-ATTENTION (MHSA) AND CYCLE FC

Inspired by Cordonnier et al. (2020), when re-parametrized properly, a multi-head self-attention layer with $N_{h}$ heads can be formulated as below, which is similar to a convolution with kernel size $\sqrt{N_h} \times \sqrt{N_h}$ .
(Please refer to Appendix C for the detailed derivation.)

$$
\operatorname {M H S A} (\boldsymbol {X}) _ {i, j,:} = \sum_ {h \in \{1, 2, \dots , N _ {h} \}} \boldsymbol {X} _ {i + \Delta_ {i} (h), j + \Delta_ {j} (h),:} \boldsymbol {W} ^ {\mathrm {m h s a}, h} + \boldsymbol {b} \tag {3}
$$

where $W^{\mathrm{mhsa},h} \in \mathbb{R}^{C_{in} \times C_{out}}$ is the parameter matrix for the $h^{th}$ head in MHSA. $b \in \mathbb{R}^{C_{out}}$ is the bias vector. $\{\Delta_i(h), \Delta_j(h)\} = \{(0,0), (1,0), (-1,0), \dots\}$ contains all possible positional shifts in a convolution with kernel size $\sqrt{N_h} \times \sqrt{N_h}$ . Further, we stack all $W^{\mathrm{mhsa},h}$ together and reshape the result into $W^{\mathrm{mhsa}} \in \mathbb{R}^{K \times K \times C_{in} \times C_{out}}$ . Then the relationship between $W^{\mathrm{mlp}}$ and $W^{\mathrm{mhsa}}$ can be formulated as follows.

$$
\boldsymbol {W} _ {c,:} ^ {\mathrm {m l p}} = \boldsymbol {W} _ {\delta_ {i} (c) + 1, \delta_ {j} (c) + 1, c,:} ^ {\mathrm {m h s a}} \tag {4}
$$

Equation 4 shows that only the weights of $W^{\mathrm{mhsa}}$ at spatial shift $(\delta_i(c) + 1, \delta_j(c) + 1)$ are taken into account in $W^{\mathrm{mlp}}$ . This indicates that Cycle FC introduces an inductive bias that the weighting matrix in MHSA should be sparse. Cycle FC thus inherits the large receptive field introduced in MHSA. The receptive field of Cycle FC is enlarged to $(S_H, S_W)$ , which enables it to better tackle downstream dense prediction tasks. Meanwhile, with the sparsity inductive bias, Cycle FC maintains the computational efficiency of MLP-based methods as compared to convolution and multi-head self-attention: the parameter size of Cycle FC is $C_{in} \times C_{out}$ while $W^{\mathrm{mhsa}} \in \mathbb{R}^{K \times K \times C_{in} \times C_{out}}$ .

# 2.3 OVERALL ARCHITECTURE

Patch Embedding.
Given a raw input image of size $H \times W \times 3$ , our model first splits it into patches with a patch embedding module (Dosovitskiy et al., 2020). Each patch is then treated as a "token". Specifically, we follow (Fan et al., 2021; Wang et al., 2021a) in adopting an overlapping patch embedding module with window size 7 and stride 4. These raw patches are further projected to a higher dimension (denoted as $C$ ) by a linear embedding layer. Therefore, the overall patch embedding module generates features with the shape $\frac{H}{4} \times \frac{W}{4} \times C$ .

CycleMLP Block. We then sequentially apply several CycleMLP blocks. Compared with the previous MLP blocks (Tolstikhin et al., 2021; Touvron et al., 2021a; Liu et al., 2021a) visualized in Figure 5 (Appendix), the key difference of the CycleMLP block is that it utilizes our proposed Cycle Fully-Connected Layer (Cycle FC) for spatial projection, advancing the models in context aggregation and information communication. Specifically, the CycleMLP block consists of three parallel Cycle FCs, with step sizes $S_{H} \times S_{W}$ of $1 \times 7$ , $7 \times 1$ , and $1 \times 1$ . This design is inspired by the factorization of convolution (Szegedy et al., 2016) and criss-cross attention (Huang et al., 2019). Then, there is a channel-MLP with two linear layers and a GELU (Hendrycks & Gimpel, 2016) non-linearity in between. A LayerNorm (LN) (Ba et al., 2016) layer is applied before both the parallel Cycle FC layers and the channel-MLP module. A residual connection (He et al., 2016) is applied after each module.
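To make equation 1 concrete, here is a minimal NumPy sketch of a single Cycle FC (our illustration, not the official implementation; in particular, circular wrapping at the borders via `np.roll` is our assumption, since the padding scheme is not spelled out here):

```python
import numpy as np

def cycle_fc(x, w, b, s_h, s_w):
    """Sketch of Cycle FC (equation 1): x is (H, W, C_in), w is (C_in, C_out), b is (C_out,)."""
    H, W, C_in = x.shape
    out = np.broadcast_to(b, (H, W, b.shape[0])).astype(float)
    for c in range(C_in):
        di = (c % s_h) - 1               # equation 2, height offset
        dj = ((c // s_h) % s_w) - 1      # equation 2, width offset
        # roll so that position (i, j) reads x[i + di, j + dj, c] (wrapping at borders)
        shifted = np.roll(np.roll(x[:, :, c], -di, axis=0), -dj, axis=1)
        out += shifted[:, :, None] * w[c]  # broadcast (H, W, 1) * (C_out,)
    return out
```

Note that the weight shape $C_{in} \times C_{out}$ is independent of $H$ and $W$, which is exactly the property that lets the operator cope with variable input resolutions.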
| Model | Param | FLOPs | Top-1 |
| --- | --- | --- | --- |
| EAMLP-14 | 30M | - | 78.9 |
| EAMLP-19 | 55M | - | 79.4 |
| Mixer-B/16 | 59M | 12.7G | 76.4 |
| Mixer-B/16† | 59M | 12.7G | 77.3 |
| ResMLP-S12 | 15M | 3.0G | 76.6 |
| ResMLP-S24 | 30M | 6.0G | 79.4 |
| ResMLP-B24 | 116M | 23.0G | 81.0 |
| gMLP-Ti | 6M | 1.4G | 72.3 |
| gMLP-S | 20M | 4.5G | 79.6 |
| gMLP-B | 73M | 15.8G | 81.6 |
| S2-MLP-wide | 71M | 14.0G | 80.0 |
| S2-MLP-deep | 51M | 10.5G | 80.7 |
| ViP-Small/7 | 25M | 6.9G | 81.5 |
| ViP-Medium/7 | 55M | 16.3G | 82.7 |
| ViP-Large/7 | 88M | 24.4G | 83.2 |
| AS-MLP-T | 28M | 4.4G | 81.3 |
| AS-MLP-S | 50M | 8.5G | 83.1 |
| AS-MLP-B | 88M | 15.2G | 83.3 |
| CycleMLP-B1 | 15M | 2.1G | 79.1 |
| CycleMLP-B2 | 27M | 3.9G | 81.6 |
| CycleMLP-B3 | 38M | 6.9G | 82.6 |
| CycleMLP-B4 | 52M | 10.1G | 83.0 |
| CycleMLP-B5 | 76M | 12.3G | 83.1 |
| CycleMLP-T | 28M | 4.4G | 81.3 |
| CycleMLP-S | 50M | 8.5G | 82.9 |
| CycleMLP-B | 88M | 15.2G | 83.4 |
Table 2: ImageNet-1K classification for MLP-like models.
| Model | Family | Scale | Param | FLOPs | Top-1 |
| --- | --- | --- | --- | --- | --- |
| ResNet18 | CNN | $224^2$ | 12M | 1.8G | 69.8 |
| EffNet-B3 | CNN | $300^2$ | 12M | 1.8G | 81.6 |
| GFNet-H-Ti | FFT | $224^2$ | 15M | 2.0G | 80.1 |
| CycleMLP-B1 | MLP | $224^2$ | 15M | 2.1G | 78.9 |
| ResNet50 | CNN | $224^2$ | 26M | 4.1G | 78.5 |
| DeiT-S | Trans | $224^2$ | 22M | 4.6G | 79.8 |
| BoT-S1-50 | Hybrid | $224^2$ | 21M | 4.3G | 79.1 |
| PVT-S | Trans | $224^2$ | 25M | 3.8G | 79.8 |
| Swin-T | Trans | $224^2$ | 29M | 4.5G | 81.3 |
| GFNet-H-S | FFT | $224^2$ | 32M | 4.5G | 81.5 |
| CycleMLP-B2 | MLP | $224^2$ | 27M | 3.9G | 81.6 |
| ResNet101 | CNN | $224^2$ | 45M | 7.9G | 79.8 |
| RegNetY-8G | CNN | $224^2$ | 39M | 8.0G | 81.7 |
| BoT-S1-59 | Hybrid | $224^2$ | 34M | 7.3G | 81.7 |
| PVT-M | Trans | $224^2$ | 44M | 6.7G | 81.2 |
| CycleMLP-B3 | MLP | $224^2$ | 38M | 6.9G | 82.4 |
| GFNet-H-B | FFT | $224^2$ | 54M | 8.4G | 82.9 |
| Swin-S | Trans | $224^2$ | 50M | 8.7G | 83.0 |
| PVT-L | Trans | $224^2$ | 61M | 9.8G | 81.7 |
| CycleMLP-S | MLP | $224^2$ | 50M | 8.5G | 82.9 |
| ViT-B/16 | Trans | $384^2$ | 86M | 55.4G | 77.9 |
| DeiT-B | Trans | $224^2$ | 86M | 17.5G | 81.8 |
| DeiT-B | Trans | $384^2$ | 86M | 55.4G | 83.1 |
| Swin-B | Trans | $224^2$ | 88M | 15.4G | 83.3 |
| CycleMLP-B | MLP | $224^2$ | 88M | 15.2G | 83.4 |
Table 3: Comparison with SOTA models on ImageNet-1K without extra data.

Stage. Blocks with the same architecture are stacked to form one stage (He et al., 2016). The number of tokens (feature scale) is maintained within each stage. At each stage transition, the channel capacity of the processed tokens is expanded while the number of tokens is reduced. This strategy effectively reduces the spatial resolution and thus the computational cost. Overall, each of our model variants has four stages, and the output feature of the last stage has a shape of $\frac{H}{32} \times \frac{W}{32} \times C_4$ . These stage settings are widely utilized in both CNN (Simonyan & Zisserman, 2014; He et al., 2016) and Transformer (Wang et al., 2021b; Liu et al., 2021b) models. Therefore, CycleMLP can conveniently serve as a general-purpose visual backbone and a generic replacement for existing backbones.

Model Variants. The design principle of the model's macro structure is mainly inspired by the philosophy of hierarchical Transformer models (Wang et al., 2021b; Liu et al., 2021b), which reduce the number of tokens and increase the channel dimension at the transition layers as the network goes deeper. In this way, we can build a hierarchical architecture that is critical for dense prediction tasks (Lin et al., 2014; Zhou et al., 2017). Specifically, we build two model zoos following two widely used Transformer architectures, PVT (Wang et al., 2021b) and Swin (Liu et al., 2021b). Models in PVT style are named CycleMLP-B1 to CycleMLP-B5, and models in Swin style are named CycleMLP-T, -S, and -B, representing tiny, small, and base sizes. These models are built by adapting several architecture-related hyper-parameters, including $S_{i}$ , $C_{i}$ , $E_{i}$ , and $L_{i}$ , which represent the stride of the transition, the token channel dimension, the expansion ratio, and the number of blocks, respectively, at Stage $i$ . Detailed configurations of these models are in Table 11 (Appendix).
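As a sanity check, the stage-wise feature shapes described above can be computed directly (a sketch assuming a stride-4 patch embedding followed by three stride-2 stage transitions, which reproduces the $\frac{H}{32} \times \frac{W}{32}$ output of the last stage):

```python
def pyramid_shapes(h, w):
    # Stage 1 starts at H/4 x W/4 (overlapping patch embed, stride 4);
    # each later stage transition halves the spatial size (stride 2, assumed).
    h, w = h // 4, w // 4
    shapes = []
    for _ in range(4):
        shapes.append((h, w))
        h, w = h // 2, w // 2
    return shapes

print(pyramid_shapes(224, 224))  # [(56, 56), (28, 28), (14, 14), (7, 7)]
```

These four resolutions are exactly the pyramid levels that dense prediction heads such as FPN consume.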
# 3 EXPERIMENTS

In this section, we first evaluate CycleMLP on ImageNet-1K (Deng et al., 2009) image classification. Then, we present a set of CycleMLP baselines for dense prediction tasks, i.e., COCO (Lin et al., 2014) object detection and instance segmentation, and ADE20K (Zhou et al., 2017) semantic segmentation.
| 1×7 | 7×1 | 1×1 | Params | FLOPs | Top-1 Acc |
| --- | --- | --- | --- | --- | --- |
|  |  |  |  |  | 80.5 |
|  |  |  | 24.5M | 3.6G | 80.4 |
|  |  |  |  |  | 81.3 |
|  |  |  | 26.8M | 3.9G | 80.6 |
|  |  |  |  |  | 80.5 |
| ✓ | ✓ | ✓ | 26.8M | 3.9G | 81.6 |
Table 4: Ablation on the three parallel branches. We adopt the CycleMLP-B2 variant for this ablation study. Double check marks (✓✓) denote two identical branches.
| Stepsize | ImgNet Top-1 | ADE20K mIoU |
| --- | --- | --- |
| 3 | 81.6 | 42.4 |
| 5 | 81.6 (+0.0) | 43.2 (+0.8) |
| 7 | 81.6 (+0.0) | 43.9 (+1.5) |
| 9 | 81.5 (-0.1) | 43.2 (+0.8) |

Table 5: Stepsize ablation: CycleMLP achieves the highest mIoU on ADE20K when the stepsize is 7, while the stepsize has negligible influence on ImageNet classification.

# 3.1 IMAGENET-1K CLASSIFICATION

The experimental settings for ImageNet classification mostly follow DeiT (Touvron et al., 2020) and Swin (Liu et al., 2021b); details can be found in Appendix E.1.

Comparison with MLP-like Models. We first compare CycleMLP with existing MLP-like models; the results are summarized in Table 2 and Figure 2. The accuracy-FLOPs tradeoff of CycleMLP consistently outperforms existing MLP-like models (Tolstikhin et al., 2021; Touvron et al., 2021a; Liu et al., 2021a; Guo et al., 2021; Yu et al., 2021; Hou et al., 2021) under a wide range of FLOPs, which we attribute to the effectiveness of our Cycle FC. Specifically, compared with one of the pioneering MLP works, gMLP (Liu et al., 2021a), CycleMLP-B2 achieves the same top-1 accuracy $(81.6\%)$ as gMLP-B while using more than $3\times$ fewer FLOPs (3.9G for CycleMLP-B2 vs. 15.8G for gMLP-B). Furthermore, compared with the existing SOTA MLP-like model, ViP (Hou et al., 2021), our CycleMLP-B uses fewer FLOPs (15.2G) than ViP-Large/7 (24.4G, the largest of the ViP family) while achieving higher top-1 accuracy.

Note that none of the previous MLP-like models listed in Table 2 conduct experiments on dense prediction tasks, due to their inability to deal with variable input scales, as discussed in Sec. 1. CycleMLP solves this issue by adopting Cycle FC. The experimental results on dense prediction tasks are presented in Sec. 3.3 and Sec. 3.4.

Comparison with SOTA Models. Table 3 further compares CycleMLP with previous state-of-the-art CNN, Transformer, and hybrid architectures.
It is interesting to see that CycleMLP models achieve performance comparable to Swin Transformer (Liu et al., 2021b), the state-of-the-art Transformer-based model. Specifically, CycleMLP-B achieves slightly better top-1 accuracy $(83.4\%)$ than Swin-B $(83.3\%)$ with similar parameters and FLOPs. GFNet (Rao et al., 2021) utilizes the fast Fourier transform (FFT) (Cooley & Tukey, 1965) to learn spatial information and achieves performance similar to CycleMLP on ImageNet-1K classification. However, the architecture of GFNet is correlated with the input resolution, and an extra operation (parameter interpolation) is required when the input scale changes, which may hurt performance on dense predictions. We thoroughly compare CycleMLP with GFNet in Sec. 3.4 on ADE20K.

# 3.2 ABLATION STUDY

In this subsection, we conduct extensive ablation studies to analyze each component of our design. Unless otherwise stated, we adopt the CycleMLP-B2 instantiation.

Cycle Fully-Connected Layer. To demonstrate the advantage of Cycle FC, we compare CycleMLP-B2 with two baseline models equipped with Channel FC and Spatial FC, respectively, as spatial context aggregation operators. The differences between these operators are visualized in Figure 1, and the comparison results are shown in Table 1. CycleMLP-B2 outperforms the counterparts built on both Spatial and Channel FC for ImageNet classification, COCO object detection, instance segmentation, and ADE20K semantic segmentation. The results validate that Cycle FC can serve as a general-purpose, plug-and-play operator for spatial information communication and context aggregation.

![](images/f16e155fa136627bb84b255c09831b975590c9efaa8826f879ddccc01b21ae94.jpg)

![](images/8de9a5383ae5f3c120386b09804a1fa42eebf9ade34b60a2d1aaf1e352509237.jpg)
Figure 3: Resolution adaptability. All models are trained on $224 \times 224$ and evaluated on various resolutions without fine-tuning.
Left: Absolute top-1 accuracy; Right: Accuracy difference relative to that tested on $224 \times 224$ . The superiority of CycleMLP's robustness becomes more significant as the scale varies to a greater extent.

Table 4 further details the ablation study on the structure of the CycleMLP block. The top-1 accuracy drops significantly after removing one of the three parallel branches, especially when discarding the $1 \times 7$ or $7 \times 1$ branch. To rule out the possibility that fewer parameters and FLOPs cause the performance drop, we further use two identical branches (denoted as "✓✓" in Table 4) plus one $1 \times 1$ branch to align the parameters and FLOPs. The accuracy still drops relative to CycleMLP, which further demonstrates the necessity of the three distinct branches.

Resolution adaptability. One remarkable advantage of CycleMLP is that it can take arbitrary-resolution images as input without any modification. On the contrary, GFNet (Rao et al., 2021) needs to interpolate its learnable parameters on the fly when the input scale differs from the training scale. We compare resolution adaptability by directly evaluating models at a broad spectrum of resolutions using weights pre-trained on $224 \times 224$ , without fine-tuning. Figure 3 (left) shows the absolute top-1 accuracy on ImageNet, and Figure 3 (right) shows the accuracy difference between each resolution and $224 \times 224$ . Compared with DeiT and GFNet, CycleMLP is more robust when the resolution varies. In particular, at $128 \times 128$ , CycleMLP's accuracy drop is more than 2 points smaller than GFNet's. Furthermore, at higher resolutions, the performance drop of CycleMLP is smaller than GFNet's. Note that the superiority of CycleMLP becomes more significant when the resolution changes to a greater extent.

# 3.3 OBJECT DETECTION AND INSTANCE SEGMENTATION

Settings.
We conduct object detection and instance segmentation experiments on the COCO (Lin et al., 2014) dataset. We first follow the experimental settings of PVT (Wang et al., 2021b), which are introduced in Appendix E.2; the corresponding results are presented in Table 6. Then, in order to compare fairly with Swin Transformer, which adopts a different experimental recipe from PVT, we further follow the experimental settings of Swin with our CycleMLP-S model; the results are presented in Table 7.
| Backbone | RetinaNet Param | AP | AP$_{50}$ | AP$_{75}$ | AP$_S$ | AP$_M$ | AP$_L$ | Mask R-CNN Param | AP$^b$ | AP$^b_{50}$ | AP$^b_{75}$ | AP$^m$ | AP$^m_{50}$ | AP$^m_{75}$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet18 | 21.3M | 31.8 | 49.6 | 33.6 | 16.3 | 34.3 | 43.2 | 31.2M | 34.0 | 54.0 | 36.7 | 31.2 | 51.0 | 32.7 |
| PVT-Tiny | 23.0M | 36.7 | 56.9 | 38.9 | 22.6 | 38.8 | 50.0 | 32.9M | 36.7 | 59.2 | 39.3 | 35.1 | 56.7 | 37.3 |
| CycleMLP-B1 | 24.9M | 38.1 | 58.7 | 40.1 | 21.9 | 41.9 | 50.4 | 34.8M | 39.8 | 61.7 | 43.3 | 37.0 | 58.8 | 39.7 |
| ResNet50 | 37.7M | 36.3 | 55.3 | 38.6 | 19.3 | 40.0 | 48.8 | 44.2M | 38.0 | 58.6 | 41.4 | 34.4 | 55.1 | 36.7 |
| PVT-Small | 34.2M | 40.4 | 61.3 | 43.0 | 25.0 | 42.9 | 55.7 | 44.1M | 40.4 | 62.9 | 43.8 | 37.8 | 60.1 | 40.3 |
| CycleMLP-B2 | 36.5M | 40.6 | 61.4 | 43.2 | 22.9 | 44.4 | 54.5 | 46.5M | 42.1 | 64.0 | 45.7 | 38.9 | 61.2 | 41.8 |
| ResNet101 | 56.7M | 38.5 | 57.8 | 41.2 | 21.4 | 42.6 | 51.1 | 63.2M | 40.4 | 61.1 | 44.2 | 36.4 | 57.7 | 38.8 |
| ResNeXt101-32x4d | 56.4M | 39.9 | 59.6 | 42.7 | 22.3 | 44.2 | 52.5 | 62.8M | 41.9 | 62.5 | 45.9 | 37.5 | 59.4 | 40.2 |
| PVT-Medium | 53.9M | 41.9 | 63.1 | 44.3 | 25.0 | 44.9 | 57.6 | 63.9M | 42.0 | 64.4 | 45.6 | 39.0 | 61.6 | 42.1 |
| CycleMLP-B3 | 48.1M | 42.5 | 63.2 | 45.3 | 25.2 | 45.5 | 56.2 | 58.0M | 43.4 | 65.0 | 47.7 | 39.5 | 62.0 | 42.4 |
| PVT-Large | 71.1M | 42.6 | 63.7 | 45.4 | 25.8 | 46.0 | 58.4 | 81.0M | 42.9 | 65.0 | 46.6 | 39.5 | 61.9 | 42.5 |
| CycleMLP-B4 | 61.5M | 43.2 | 63.9 | 46.2 | 26.6 | 46.5 | 57.4 | 71.5M | 44.1 | 65.7 | 48.1 | 40.2 | 62.7 | 43.5 |
| ResNeXt101-64x4d | 95.5M | 41.0 | 60.9 | 44.0 | 23.9 | 45.2 | 54.0 | 101.9M | 42.8 | 63.8 | 47.3 | 38.4 | 60.6 | 41.3 |
| CycleMLP-B5 | 85.9M | 42.7 | 63.3 | 45.3 | 24.1 | 46.3 | 57.4 | 95.3M | 44.1 | 65.5 | 48.4 | 40.1 | 62.8 | 43.0 |
Table 6: Object detection and instance segmentation on COCO val2017 (Lin et al., 2014), with RetinaNet 1× and Mask R-CNN 1× schedules. We compare CycleMLP with various backbones including ResNet (He et al., 2016), ResNeXt (Xie et al., 2017) and PVT (Wang et al., 2021b).
| Backbone | AP$^b$ | AP$^b_{50}$ | AP$^b_{75}$ | AP$^m$ | AP$^m_{50}$ | AP$^m_{75}$ | Params | FLOPs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet50 (He et al., 2016) | 41.0 | 61.7 | 44.9 | 37.1 | 58.4 | 40.1 | 44M | 260G |
| PVT-Small (Wang et al., 2021b) | 43.0 | 65.3 | 46.9 | 39.9 | 62.5 | 42.8 | 44M | 245G |
| Swin-T (Liu et al., 2021b) | 46.0 | 68.2 | 50.2 | 41.6 | 65.1 | 44.8 | 48M | 264G |
| CycleMLP-T (ours) | 46.4 | 68.1 | 51.1 | 41.8 | 64.9 | 45.1 | 48M | 260G |
Results. First, as shown in Table 6, CycleMLP-based RetinaNet consistently surpasses the CNN-based ResNet (He et al., 2016) and ResNeXt (Xie et al., 2017) and the Transformer-based PVT (Wang et al., 2021b) under similar parameter constraints, indicating that CycleMLP can serve as an excellent general-purpose backbone. Using Mask R-CNN (He et al., 2017) for instance segmentation demonstrates similar results. Furthermore, as shown in Table 7, CycleMLP achieves slightly better performance than Swin Transformer.

# 3.4 SEMANTIC SEGMENTATION

Settings. We conduct semantic segmentation experiments on the ADE20K (Zhou et al., 2017) dataset and present the detailed settings in Appendix E.3. Table 8 and Table 9 show the experimental results using the training recipes of PVT and Swin, respectively.

Table 7: The instance segmentation results of different backbones on the COCO val2017 dataset. The Mask R-CNN framework is employed.
| Backbone | Param | mIoU (%) |
| --- | --- | --- |
| ResNet18 (He et al., 2016) | 15.5M | 32.9 |
| PVT-Tiny (Wang et al., 2021b) | 17.0M | 35.7 |
| CycleMLP-B1 (ours) | 18.9M | 40.8 |
| ResNet50 (He et al., 2016) | 28.5M | 36.7 |
| PVT-Small (Wang et al., 2021b) | 28.2M | 39.8 |
| Swin-T† (Liu et al., 2021b) | 31.9M | 41.5 |
| GFNet-Tiny (Rao et al., 2021) | 26.6M | 41.0 |
| CycleMLP-B2 (ours) | 30.6M | 43.4 |
| ResNet101 (He et al., 2016) | 47.5M | 38.8 |
| ResNeXt101-32x4d (Xie et al., 2017) | 47.1M | 39.7 |
| PVT-Medium (Wang et al., 2021b) | 48.0M | 41.6 |
| GFNet-Small (Rao et al., 2021) | 47.5M | 42.5 |
| CycleMLP-B3 (ours) | 42.1M | 44.3 |
| PVT-Large (Wang et al., 2021b) | 65.1M | 42.1 |
| Swin-S† (Liu et al., 2021b) | 53.2M | 45.2 |
| CycleMLP-B4 (ours) | 55.6M | 45.1 |
| GFNet-Base (Rao et al., 2021) | 74.7M | 44.8 |
| ResNeXt101-64x4d (Xie et al., 2017) | 86.4M | 40.2 |
| CycleMLP-B5 (ours) | 79.4M | 45.5 |
Table 8: Semantic segmentation on ADE20K (Zhou et al., 2017) val. All models are equipped with Semantic FPN (Kirillov et al., 2019). $\dagger$ Results are from GFNet (Rao et al., 2021).

![](images/342e07fe8b5a0d49df38290d96fa60f205c67ee050ff6c59e6f2905b90ddf66b.jpg)
Figure 4: Effective Receptive Field (ERF). We visualize the ERFs of the last stage for both Swin (Liu et al., 2021b) and CycleMLP. Best viewed with zoom in.

Results. As shown in Table 8, CycleMLP outperforms ResNet (He et al., 2016) and PVT (Wang et al., 2021b) significantly with similar parameters. Moreover, compared to the state-of-the-art Transformer-based backbone, Swin Transformer (Liu et al., 2021b), CycleMLP obtains comparable or even better performance. Specifically, CycleMLP-B2 surpasses Swin-T by $0.9\mathrm{mIoU}$ with slightly fewer parameters (30.6M vs. 31.9M).

Although GFNet (Rao et al., 2021) achieves performance similar to CycleMLP on ImageNet classification, CycleMLP notably outperforms GFNet on ADE20K semantic segmentation, where the input scale varies. We attribute the superiority of CycleMLP in scale-variable scenarios to its capability of dealing with arbitrary scales. In contrast, GFNet (Rao et al., 2021) requires additional
| Method | Backbone | val MS mIoU | Params | FLOPs |
| --- | --- | --- | --- | --- |
| UperNet (Xiao et al., 2018) | Swin-T (Liu et al., 2021b) | 45.8 | 60M | 945G |
| UperNet (Xiao et al., 2018) | AS-MLP-T (Lian et al., 2021) | 46.5 | 60M | 937G |
| UperNet (Xiao et al., 2018) | CycleMLP-T (ours) | 47.1 | 60M | 937G |
| UperNet (Xiao et al., 2018) | Swin-S (Liu et al., 2021b) | 49.5 | 81M | 1038G |
| UperNet (Xiao et al., 2018) | AS-MLP-S (Lian et al., 2021) | 49.2 | 81M | 1024G |
| UperNet (Xiao et al., 2018) | CycleMLP-S (ours) | 49.6 | 81M | 1024G |
| UperNet (Xiao et al., 2018) | Swin-B (Liu et al., 2021b) | 49.7 | 121M | 1188G |
| UperNet (Xiao et al., 2018) | AS-MLP-B (Lian et al., 2021) | 49.5 | 121M | 1166G |
| UperNet (Xiao et al., 2018) | CycleMLP-B (ours) | 49.7 | 121M | 1166G |
heuristic operation (weight interpolation) when the input scale varies, which may hurt performance.

Moreover, we also visualize the effective receptive field following (Xie et al., 2021); the results in Figure 4 demonstrate that CycleMLP has a larger effective receptive field than Swin.

# 3.5 ROBUSTNESS

Table 9: The semantic segmentation results of different backbones on the ADE20K validation set.
| Network | mCE↓ | Gauss | Shot | Impulse | Defocus | Glass | Motion | Zoom | Snow | Frost | Fog | Bright | Contrast | Elastic | Pixel | JPEG |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet-50 | 76.7 | 79.8 | 81.6 | 82.6 | 74.7 | 88.6 | 78.0 | 79.9 | 77.8 | 74.8 | 66.1 | 56.6 | 71.4 | 84.7 | 76.9 | 76.8 |
| DeiT-S | 54.6 | 46.3 | 47.7 | 46.4 | 61.6 | 71.9 | 57.9 | 71.9 | 49.9 | 46.2 | 46.0 | 44.9 | 42.3 | 66.6 | 59.1 | 60.4 |
| Swin-S | 62.0 | 52.2 | 53.7 | 53.6 | 67.9 | 78.6 | 64.1 | 75.3 | 55.8 | 52.8 | 51.3 | 48.1 | 45.1 | 75.7 | 76.3 | 79.1 |
| MLP-Mixer | 78.8 | 80.9 | 82.6 | 84.2 | 86.9 | 92.1 | 79.1 | 93.6 | 78.3 | 67.4 | 64.6 | 59.5 | 57.1 | 90.5 | 72.7 | 92.2 |
| ResMLP-12 | 66.0 | 57.6 | 58.2 | 57.8 | 72.6 | 83.2 | 67.9 | 76.5 | 61.4 | 57.8 | 63.8 | 53.9 | 52.1 | 78.3 | 72.9 | 75.3 |
| gMLP-S | 64.0 | 52.1 | 53.2 | 52.5 | 73.1 | 77.6 | 64.6 | 79.9 | 77.7 | 78.8 | 54.3 | 55.3 | 43.6 | 70.6 | 58.6 | 67.5 |
| CycleMLP-S | 53.7 | 42.1 | 43.4 | 43.2 | 61.5 | 76.7 | 56.0 | 66.4 | 51.5 | 47.2 | 50.8 | 41.2 | 39.5 | 72.3 | 57.5 | 56.1 |

Table 10: Robustness on ImageNet-C (Hendrycks & Dietterich, 2019). The mean corruption error (mCE), normalized by AlexNet (Krizhevsky et al., 2012) errors, is used as the robustness metric; lower is better. Corruptions are grouped into Noise (Gauss, Shot, Impulse), Blur (Defocus, Glass, Motion, Zoom), Weather (Snow, Frost, Fog, Bright), and Digital (Contrast, Elastic, Pixel, JPEG).

We further conduct experiments on ImageNet-C (Hendrycks & Dietterich, 2019) to analyze the robustness of CycleMLP, following (Mao et al., 2021); the results are presented in Table 10. Compared with both Transformers (e.g., DeiT and Swin) and existing MLP models (e.g., MLP-Mixer, ResMLP, gMLP), CycleMLP is markedly more robust.

# 4 CONCLUSION

We present a versatile MLP-like architecture, CycleMLP, in this work. CycleMLP is built upon the Cycle Fully-Connected Layer (Cycle FC), which is capable of dealing with variable input scales and can serve as a generic, plug-and-play replacement for vanilla FC layers. Experimental results demonstrate that CycleMLP outperforms existing MLP-like models on ImageNet classification and achieves promising performance on dense prediction tasks, i.e., object detection, instance segmentation, and semantic segmentation. This work indicates that an attention-free architecture can also serve as a general vision backbone.

Acknowledgment. Ping Luo is supported by the General Research Fund of HK No.27208720, No.17212120, and the HKU-TCL Joint Research Center for Artificial Intelligence.

# REFERENCES

Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, and Cordelia Schmid. Vivit: A video vision transformer. arXiv preprint arXiv:2103.15691, 2021.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? arXiv preprint arXiv:2102.05095, 2021.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.
Table 10: Robustness on ImageNet-C (Hendrycks & Dietterich, 2019). The mean corruption error (mCE) normalized by AlexNet (Krizhevsky et al., 2012) errors is used as the robustness metric. The lower, the better.

We further conduct experiments on ImageNet-C (Hendrycks & Dietterich, 2019) to analyze the robustness of CycleMLP, following (Mao et al., 2021); the results are presented in Table 10. Compared with both Transformers (e.g., DeiT and Swin) and existing MLP models (e.g., MLP-Mixer, ResMLP, gMLP), CycleMLP achieves stronger robustness.

# 4 CONCLUSION

We present a versatile MLP-like architecture, CycleMLP, in this work. CycleMLP is built upon the Cycle Fully-Connected Layer (Cycle FC), which is capable of dealing with variable input scales and can serve as a generic, plug-and-play replacement for vanilla FC layers. Experimental results demonstrate that CycleMLP outperforms existing MLP-like models on ImageNet classification and achieves promising performance on dense prediction tasks, i.e., object detection, instance segmentation, and semantic segmentation. This work indicates that an attention-free architecture can also serve as a general vision backbone.

Acknowledgment. Ping Luo is supported by the General Research Fund of HK No.27208720, No.17212120, and the HKU-TCL Joint Research Center for Artificial Intelligence.

# REFERENCES

Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, and Cordelia Schmid. Vivit: A video vision transformer. arXiv preprint arXiv:2103.15691, 2021.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? arXiv preprint arXiv:2102.05095, 2021.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.
Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. +Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European Conference on Computer Vision, pp. 213-229. Springer, 2020. +Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, Zheng Zhang, Dazhi Cheng, Chenchen Zhu, Tianheng Cheng, Qijie Zhao, Buyu Li, Xin Lu, Rui Zhu, Yue Wu, Jifeng Dai, Jingdong Wang, Jianping Shi, Wanli Ouyang, Chen Change Loy, and Dahua Lin. MMDetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019. +Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV), pp. 801-818, 2018. +MMSegmentation Contributors. MMSegmentation: Openmmlab semantic segmentation toolbox and benchmark. https://github.com/open-mmlab/mmsegmentation, 2020. +James W Cooley and John W Tukey. An algorithm for the machine calculation of complex fourier series. Mathematics of computation, 19(90):297-301, 1965. +Jean-Baptiste Cordonnier, Andreas Loukas, and Martin Jaggi. On the relationship between self-attention and convolutional layers. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HJlnC1rKPB. +Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3213-3223, 2016. +Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 702-703, 2020. +Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019. +Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. +Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. + +Maksim Dzabraev, Maksim Kalashnikov, Stepan Komkov, and Aleksandr Petiushko. Mdmt: Multidomain multimodal transformer for video retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3354-3363, 2021. +Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. arXiv preprint arXiv:2104.11227, 2021. +Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3146-3154, 2019a. +Jun Fu, Jing Liu, Yuhang Wang, Yong Li, Yongjun Bao, Jinhui Tang, and Hanqing Lu. Adaptive context network for scene parsing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6748-6757, 2019b. +Valentin Gabeur, Chen Sun, Karteek Alahari, and Cordelia Schmid. 
Multi-modal transformer for video retrieval. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part IV 16, pp. 214-229. Springer, 2020. +Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249-256. JMLR Workshop and Conference Proceedings, 2010. +Meng-Hao Guo, Zheng-Ning Liu, Tai-Jiang Mu, and Shi-Min Hu. Beyond self-attention: External attention using two linear layers for visual tasks. arXiv preprint arXiv:2105.02358, 2021. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016. +Kaiming He, Georgia Gkioxari, Piotr Dólar, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 2961-2969, 2017. +Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. Proceedings of the International Conference on Learning Representations, 2019. +Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016. +Qibin Hou, Zihang Jiang, Li Yuan, Ming-Ming Cheng, Shuicheng Yan, and Jiashi Feng. Vision permutator: A permutable mlp-like architecture for visual recognition. arXiv preprint arXiv:2106.12368, 2021. +Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. +Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In European conference on computer vision, pp. 646-661. Springer, 2016. 
+Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. +Zilong Huang, Xinggang Wang, Lichao Huang, Chang Huang, Yunchao Wei, and Wenyu Liu. Ccnet: Criss-cross attention for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 603-612, 2019. +Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pp. 448-456. PMLR, 2015. + +Alexander Kirillov, Ross Girshick, Kaiming He, and Piotr Dólar. Panoptic feature pyramid networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6399-6408, 2019. +Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25:1097-1105, 2012. +Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541-551, 1989. +Changlin Li, Tao Tang, Guangrun Wang, Jiefeng Peng, Bing Wang, Xiaodan Liang, and Xiaojun Chang. Bossnas: Exploring hybrid cnn-transformers with block-wisely self-supervised neural architecture search. arXiv preprint arXiv:2103.12424, 2021. +Dongze Lian, Zehao Yu, Xing Sun, and Shenghua Gao. As-mlp: An axial shifted mlp architecture for vision. arXiv preprint arXiv:2107.08391, 2021. +Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013. +Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pp. 740-755. Springer, 2014. 
+Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dólar. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980-2988, 2017. +Hanxiao Liu, Zihang Dai, David R So, and Quoc V Le. Pay attention to mlps. arXiv preprint arXiv:2105.08050, 2021a. +Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. arXiv preprint arXiv:2103.14030, 2021b. +Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. +Xiaofeng Mao, Gege Qi, Yuefeng Chen, Xiaodan Li, Shaokai Ye, Yuan He, and Hui Xue. Rethinking the design principles of robust vision transformer. arXiv preprint arXiv:2105.07926, 2021. +Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 8024-8035. Curran Associates, Inc., 2019. +Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020, 2021. +Yongming Rao, Wenliang Zhao, Zheng Zhu, Jiwen Lu, and Jie Zhou. Global filter networks for image classification. arXiv preprint arXiv:2107.00645, 2021. +Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 
+Aravind Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, and Ashish Vaswani. Bottleneck transformers for visual recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16519-16529, 2021. + +Peize Sun, Rufeng Zhang, Yi Jiang, Tao Kong, Chenfeng Xu, Wei Zhan, Masayoshi Tomizuka, Lei Li, Zehuan Yuan, Changhu Wang, et al. Sparse r-cnn: End-to-end object detection with learnable proposals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14454-14463, 2021. +Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1-9, 2015. +Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818-2826, 2016. +Ilya Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, et al. Mlp-mixer: An all-mlp architecture for vision. arXiv preprint arXiv:2105.01601, 2021. +Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. arXiv preprint arXiv:2012.12877, 2020. +Hugo Touvron, Piotr Bojanowski, Mathilde Caron, Matthieu Cord, Alaaeldin El-Nouby, Edouard Grave, Armand Joulin, Gabriel Synnaeve, Jakob Verbeek, and Hervé Jégou. Resmlp: Feedforward networks for image classification with data-efficient training. arXiv preprint arXiv:2105.03404, 2021a. +Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, and Hervé Jégou. Going deeper with image transformers. 
arXiv preprint arXiv:2103.17239, 2021b. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018. +Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pvtv2: Improved baselines with pyramid vision transformer. arXiv preprint arXiv:2106.13797, 2021a. +Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. arXiv preprint arXiv:2102.12122, 2021b. +Yuqing Wang, Zhaoliang Xu, Xinlong Wang, Chunhua Shen, Baoshan Cheng, Hao Shen, and Huaxia Xia. End-to-end video instance segmentation with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8741-8750, 2021c. +Ross Wightman. Pytorch image models. https://github.com/rwrightman/pytorch-image-models, 2019. +Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, and Lei Zhang. Cvt: Introducing convolutions to vision transformers. arXiv preprint arXiv:2103.15808, 2021. +Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 418-434, 2018. +Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. Segformer: Simple and efficient design for semantic segmentation with transformers. arXiv preprint arXiv:2105.15203, 2021. + +Saining Xie, Ross Girshick, Piotr Dolkar, Zhuowen Tu, and Kaiming He. 
Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1492-1500, 2017. +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32, 2019. +Minghao Yin, Zhuliang Yao, Yue Cao, Xiu Li, Zheng Zhang, Stephen Lin, and Han Hu. Disentangled non-local neural networks. In European Conference on Computer Vision, pp. 191-207. Springer, 2020. +Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016. +Tan Yu, Xu Li, Yunfeng Cai, Mingming Sun, and Ping Li. $S^2$ -mlp: Spatial-shift mlp architecture for vision. arXiv preprint arXiv:2106.07477, 2021. +Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zihang Jiang, Francis EH Tay, Jiashi Feng, and Shuicheng Yan. Tokens-to-token vit: Training vision transformers from scratch onImagenet. arXiv preprint arXiv:2101.11986, 2021. +Yuhui Yuan, Xilin Chen, and Jingdong Wang. Object-contextual representations for semantic segmentation. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VI 16, pp. 173-190. Springer, 2020. +Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6023-6032, 2019. +Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017. +Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip HS Torr, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6881-6890, 2021.
Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2020.
Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 633-641, 2017.
Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159, 2020.

# A LITERATURE ON VISION MODELS

CNN-based Models. Originally introduced over twenty years ago (LeCun et al., 1989), convolutional neural networks (CNNs) have been widely adopted since the success of AlexNet (Krizhevsky et al., 2012), which outperformed prevailing approaches based on hand-crafted image features. Several attempts have been made to improve the design of CNN-based models. VGG (Simonyan & Zisserman, 2014) demonstrated state-of-the-art performance on ImageNet by deploying small $(3\times 3)$ convolution kernels in all layers. He et al. introduced skip-connections in ResNets (He et al., 2016), enabling a model variant with more than 1000 layers. DenseNet (Huang et al., 2017) connected each layer to every other layer in a feed-forward fashion, strengthening feature propagation and reducing the number of parameters. In parallel with these architecture design works, other works also made significant contributions to the popularity of CNNs, including normalization (Ioffe & Szegedy, 2015; Ba et al., 2016), data augmentation (Cubuk et al., 2020; Yun et al., 2019; Zhang et al., 2017), etc.

Transformer-based Models.
Transformers were first proposed by Vaswani et al. for machine translation and have since become the dominant choice for many NLP tasks (Devlin et al., 2018; Wang et al., 2018; Yang et al., 2019; Brown et al., 2020). Recently, transformers have also led to a series of breakthroughs in the computer vision community since the invention of ViT (Dosovitskiy et al., 2020), and have become a de facto standard for various tasks, e.g., image classification (Dosovitskiy et al., 2020; Touvron et al., 2020; Yuan et al., 2021), detection and segmentation (Wang et al., 2021b; Liu et al., 2021b; Zheng et al., 2021; Xie et al., 2021), video recognition (Wang et al., 2021c; Bertasius et al., 2021; Arnab et al., 2021; Fan et al., 2021), and so on. Moreover, there has been much interest in adopting transformers to aggregate information across multiple modalities (Radford et al., 2021; Gabeur et al., 2020; Dzabraev et al., 2021). Furthermore, combining CNNs and transformers is also explored in (Srinivas et al., 2021; Li et al., 2021; Wu et al., 2021; Touvron et al., 2021b).

MLP-based Models. MLP-based models (Tolstikhin et al., 2021; Touvron et al., 2021a; Liu et al., 2021a) differ from the CNN- and Transformer-based models discussed above because they resort to neither convolution nor self-attention layers. Instead, they use MLP layers over feature patches along the spatial dimensions to aggregate spatial context. These MLP-based models share similar macro structures but differ from each other in the detailed design of the micro block. In addition, MLP-based models provide more efficient computation than transformer-based models since they do not need to calculate the affinity matrix via key-query multiplication. Concurrent to our work, $S^2$ -MLP (Yu et al., 2021) utilizes a spatial-shift operation for spatial information communication. The similarity between our work and $S^2$ -MLP lies in that both conduct MLP operations along the channel dimension.
However, our Cycle FC differs from $S^2$ -MLP in two respects. (1) $S^2$ -MLP achieves communication between patches by splitting the feature map along the channel dimension into several groups and shifting different groups in different directions, which introduces extra splitting and shifting operations on the feature map. In contrast, we propose a novel operator, the Cycle Fully-Connected Layer, for spatial context aggregation; it does not modify the feature map and is formulated as a generic, plug-and-play MLP unit that can be used as a direct replacement of vanilla FC layers without any adjustments. (2) We design a pyramid structure for CycleMLP and conduct extensive experiments on classification, object detection, instance segmentation, and semantic segmentation, whereas the output feature map of $S^2$ -MLP has only a single low-resolution scale, which is unsuitable for dense prediction tasks; $S^2$ -MLP is evaluated only on ImageNet classification. We compare Cycle FC with $S^2$ -MLP in detail in Section 3.

# B COMPARISON OF MLP BLOCKS

We summarize the MLP blocks proposed by recent MLP-related works in Figure 5. We notice that existing MLP blocks, i.e., those of MLP-Mixer, ResMLP, and gMLP, share a similar spatial projection: transpose $\rightarrow$ fully-connected layer over the spatial dimension $\rightarrow$ transpose back. These models cannot cope with variable image scales because the FC layers in the spatial projection are configured by the sequence length (seq_len).

The blocks used for building CycleMLP consist of our proposed Cycle FC, whose configuration is independent of the image scale and can therefore naturally deal with dynamic image scales.
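The seq_len dependence can be made concrete with a small numpy sketch (the shapes and function names below are ours, purely illustrative, not the official implementation): a token-mixing FC carries a weight matrix tied to the number of tokens, while a channel FC does not, so only the latter transfers to a different input resolution.

```python
import numpy as np

def spatial_proj(x, w):
    # MLP-Mixer-style token mixing: FC over the spatial (token) axis.
    # w has shape (seq_len, seq_len), so it is tied to one input resolution.
    return w @ x

def channel_proj(x, w):
    # Channel FC: mixes channels only; w has shape (channels, channels)
    # and therefore works for any number of tokens.
    return x @ w

w_spatial = np.zeros((49, 49))   # configured for a 7x7 feature map
w_channel = np.zeros((8, 8))     # configured for 8 channels

x_train = np.zeros((49, 8))      # 7x7 tokens, 8 channels
x_test = np.zeros((196, 8))      # 14x14 tokens at a larger test scale

spatial_proj(x_train, w_spatial)   # OK at the training resolution
channel_proj(x_test, w_channel)    # OK at any resolution
# spatial_proj(x_test, w_spatial) would raise: (49, 49) @ (196, 8) mismatch
```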
![](images/a4b219bcab89208f9bb21404601822351696513a4888be1367ea57d892b83ca6.jpg)
(a) MLP-Mixer (Tolstikhin et al., 2021)

![](images/1df315bf0b1179968a4e654ad9f038e95c14fb77c34794bbf3806fc9357188f3.jpg)
(b) ResMLP (Touvron et al., 2021a)

![](images/f2ec6c362b1c51291abb147587d9e1d346afd6d2f2e656814ca709ae1f18b456.jpg)
(c) gMLP (Liu et al., 2021a)
Figure 5: Comparison of MLP blocks in detail.

![](images/f1c2a3add742f957e1307df85117b2807a15c132614be1d4d7016b05e277b5c4.jpg)
(d) CycleMLP (Ours)

# C FROM MULTI-HEAD SELF-ATTENTION TO CONVOLUTION

In this section, we provide details on how MHSA can be transformed into a convolution-like operator as in equation 3. To start with, an MHSA layer can be formulated as below:

$$
\operatorname{MHSA}(\boldsymbol{X}) = \underset{h \in \{1, \dots, N_h\}}{\operatorname{concat}} \left[ \operatorname{SA}_h(\boldsymbol{X}) \right] \boldsymbol{W}^{\text{out}} + \boldsymbol{b} \tag{5}
$$

where $W^{out} \in \mathbb{R}^{(N_hC_{out}) \times C_{out}'}$ and $b \in \mathbb{R}^{C_{out}'}$ are the parameters of the final linear projection, and $\mathrm{SA}_h$ is the $h^{th}$ self-attention module. We then reshape $X$ into $X \in \mathbb{R}^{HW \times C_{in}}$ and let $T = H \times W$, so that $X$ contains $T$ tokens.
$\mathrm{SA}_h$ can be defined as follows:

$$
\operatorname{SA}_h(\boldsymbol{X})_{t,:} = \operatorname{softmax}(\boldsymbol{A}_{t,:}) \boldsymbol{V} + \boldsymbol{b}
$$

$$
\boldsymbol{A} = (\boldsymbol{Q} + \boldsymbol{P})(\boldsymbol{K} + \boldsymbol{P})^{\intercal} \tag{6}
$$

where $\mathbf{V} = \mathbf{X}\mathbf{W}^{\mathrm{val}}$, $\mathbf{Q} = \mathbf{X}\mathbf{W}^{\mathrm{qry}}$, and $\mathbf{K} = \mathbf{X}\mathbf{W}^{\mathrm{key}}$ are respectively the value, query, and key matrices, with learnable matrices $\mathbf{W}^{\mathrm{val}} \in \mathbb{R}^{C_{in} \times C_{out}}$, $\mathbf{W}^{\mathrm{qry}} \in \mathbb{R}^{C_{in} \times C_{k}}$, $\mathbf{W}^{\mathrm{key}} \in \mathbb{R}^{C_{in} \times C_{k}}$. $\mathbf{P} \in \mathbb{R}^{T \times C_{in}}$ is the positional embedding matrix containing positional information for every input token, which can be replaced by the output of any function $f_{P}$ that encodes the position of tokens. $A \in \mathbb{R}^{T \times T}$ is the attention matrix, where each element $A_{i,j}$ is the attention score between the $i^{th}$ and $j^{th}$ tokens in $X$.
With absolute positional encoding, the second line in equation 6 can be expanded as (Cordonnier et al., 2020):

$$
\begin{aligned} \boldsymbol{A}_{q,k} &= \left(\boldsymbol{X}_{q,:} + \boldsymbol{P}_{q,:}\right) \boldsymbol{W}^{\text{qry}} \left(\left(\boldsymbol{X}_{k,:} + \boldsymbol{P}_{k,:}\right) \boldsymbol{W}^{\text{key}}\right)^{\intercal} \\ &= \boldsymbol{X}_{q,:} \boldsymbol{W}^{\text{qry}} \left(\boldsymbol{X}_{k,:} \boldsymbol{W}^{\text{key}}\right)^{\intercal} + \boldsymbol{X}_{q,:} \boldsymbol{W}^{\text{qry}} \left(\boldsymbol{P}_{k,:} \boldsymbol{W}^{\text{key}}\right)^{\intercal} + \boldsymbol{P}_{q,:} \boldsymbol{W}^{\text{qry}} \left(\boldsymbol{X}_{k,:} \boldsymbol{W}^{\text{key}}\right)^{\intercal} + \boldsymbol{P}_{q,:} \boldsymbol{W}^{\text{qry}} \left(\boldsymbol{P}_{k,:} \boldsymbol{W}^{\text{key}}\right)^{\intercal} \end{aligned} \tag{7}
$$

When we apply the relative positional encoding scheme of (Dai et al., 2019), $A$ is re-parametrized into:

$$
\boldsymbol{A}_{q,k} = \boldsymbol{X}_{q,:} \boldsymbol{W}^{\text{qry}} \left(\boldsymbol{X}_{k,:} \boldsymbol{W}^{\text{key}}\right)^{\intercal} + \boldsymbol{X}_{q,:} \boldsymbol{W}^{\text{qry}} \left(\boldsymbol{r}_{\delta_{q,k}} \hat{\boldsymbol{W}}^{\text{key}}\right)^{\intercal} + \boldsymbol{u} \left(\boldsymbol{X}_{k,:} \boldsymbol{W}^{\text{key}}\right)^{\intercal} + \boldsymbol{v} \left(\boldsymbol{r}_{\delta_{q,k}} \hat{\boldsymbol{W}}^{\text{key}}\right)^{\intercal} \tag{8}
$$

where $\pmb{r}_{\delta_{q,k}}$ is a positional encoding for the relative distance $\delta_{q,k} = (\delta_1,\delta_2)$ between tokens $q$ and $k$ in $\pmb{X}$, and $\hat{\pmb{W}}^{\text{key}}$ is introduced so that it pertains only to the positional encoding $\pmb{r}_{\delta_{q,k}}$.
$\pmb{u}$ and $\pmb{v}$ are learnable parameter vectors that replace the original $\pmb{P}_{q,:}\pmb{W}^{\mathrm{qry}}$ term, which implies that the attention bias remains the same regardless of the absolute position of the query. If we set $\pmb{W}^{\mathrm{qry}} = \pmb{W}^{\mathrm{key}} = 0$ and $\hat{\pmb{W}}^{\mathrm{key}} = \pmb{I}$, the first three terms in equation 8 vanish and $\pmb{A}_{q,k} = \pmb{v}\pmb{r}_{\delta_{q,k}}^{\intercal}$. We set $\{\Delta_i(h),\Delta_j(h)\} = \{(0,0),(1,0),(-1,0),\dots\}$ to contain all possible positional shifts of a convolution with kernel size $\sqrt{N_h}\times \sqrt{N_h}$. For each head $h$, letting $\pmb{r}_{\delta_{q,k}} = (\| \delta_{q,k}\|^2,\delta_1,\delta_2)$ and $\pmb{v}^{h} = -\alpha^{h}(1, -2\Delta_{i}(h), -2\Delta_{j}(h))$, each softmax attention matrix becomes:

$$
\operatorname{softmax}\left(\boldsymbol{A}^{h}\right)_{q,k} = \left\{ \begin{array}{ll} 1 & \text{if } \delta_{q,k} = \left(\Delta_{i}(h), \Delta_{j}(h)\right) \\ 0 & \text{otherwise} \end{array} \right. \tag{9}
$$

Substituting $\mathrm{softmax}(\pmb{A}^h)$ into equation 5, we get

$$
\operatorname{MHSA}(\boldsymbol{X})_{i,j,:} = \sum_{h \in \{1, 2, \dots, N_{h}\}} \boldsymbol{X}_{i + \Delta_{i}(h), j + \Delta_{j}(h),:} \boldsymbol{W}^{\mathrm{mhsa},h} + \boldsymbol{b} \tag{10}
$$

# D ARCHITECTURE VARIANTS

In order to conduct fair and convenient comparisons, we build two model zoos: one in PVT-style (named CycleMLP-B1 to -B5) and the other in Swin-style (named CycleMLP-T, -S, and -B). These models are scaled by adapting several architecture-related hyper-parameters: $S_{i}$, $C_{i}$, $E_{i}$, and $L_{i}$, which represent the stride of the transition, the token channel dimension, the expansion ratio, and the number of blocks, respectively, at Stage $i$. Detailed configurations of these models are given in Table 11.

# E EXPERIMENTAL SETUPS

# E.1 IMAGENET CLASSIFICATION

Settings.
We train our models on the ImageNet-1K dataset (Deng et al., 2009), which contains 1.2M training images and 50K validation images evenly spread over 1,000 categories. We follow the standard practice in the community and report the top-1 accuracy on the validation set. Our code is implemented in the PyTorch (Paszke et al., 2019) framework and relies heavily on the timm (Wightman, 2019) repository. For an apples-to-apples comparison, our training strategy is mostly adopted from DeiT (Touvron et al., 2020), which includes RandAugment (Cubuk et al., 2020), Mixup (Zhang et al., 2017), CutMix (Yun et al., 2019), random erasing (Zhong et al., 2020), and stochastic depth (Huang et al., 2016). The optimizer is AdamW (Loshchilov & Hutter, 2017) with momentum of 0.9 and weight decay of $5 \times 10^{-2}$ by default. A cosine learning rate schedule is adopted with an initial value of $1 \times 10^{-3}$. All models are trained for 300 epochs on 8 Tesla V100 GPUs with a total batch size of 1024.
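For reference, the cosine schedule described above can be sketched in a few lines (the paper uses the timm implementation; the minimum learning rate below is our assumption, not stated in the text):

```python
import math

def cosine_lr(epoch, total_epochs=300, lr_init=1e-3, lr_min=1e-5):
    # Cosine decay from lr_init at epoch 0 down to lr_min at total_epochs.
    # lr_min is a hypothetical floor; the paper only states the initial value.
    t = min(epoch, total_epochs) / total_epochs
    return lr_min + 0.5 * (lr_init - lr_min) * (1 + math.cos(math.pi * t))

# decays smoothly from the initial value of 1e-3 over 300 epochs
lrs = [cosine_lr(e) for e in (0, 150, 300)]
```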
B1–B5 follow the PVT-style (Wang et al., 2021b) scaling, while Tiny/Small/Base follow the Swin-style (Liu et al., 2021b) scaling. Each Stage $i$ begins with an Overlapping Patch Embedding of stride $S_i$, followed by $L_i$ CycleMLP blocks with channel dimension $C_i$ and expansion ratio $E_i$.

| Stage | Output Size | Hyper-parameter | B1 | B2 | B3 | B4 | B5 | Tiny | Small | Base |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | $H/4 \times W/4$ | $S_1$ | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| | | $C_1$ | 64 | 64 | 64 | 64 | 96 | 96 | 96 | 128 |
| | | $E_1$ | 4 | 4 | 8 | 8 | 4 | 4 | 4 | 4 |
| | | $L_1$ | 2 | 2 | 3 | 3 | 3 | 2 | 2 | 2 |
| 2 | $H/8 \times W/8$ | $S_2$ | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| | | $C_2$ | 128 | 128 | 128 | 128 | 192 | 192 | 192 | 256 |
| | | $E_2$ | 4 | 4 | 8 | 8 | 4 | 4 | 4 | 4 |
| | | $L_2$ | 2 | 3 | 4 | 8 | 4 | 2 | 2 | 2 |
| 3 | $H/16 \times W/16$ | $S_3$ | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| | | $C_3$ | 320 | 320 | 320 | 320 | 384 | 384 | 384 | 512 |
| | | $E_3$ | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| | | $L_3$ | 4 | 10 | 18 | 27 | 24 | 6 | 18 | 18 |
| 4 | $H/32 \times W/32$ | $S_4$ | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| | | $C_4$ | 512 | 512 | 512 | 512 | 768 | 768 | 768 | 1024 |
| | | $E_4$ | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| | | $L_4$ | 2 | 3 | 3 | 3 | 3 | 2 | 2 | 2 |
| | | Parameters (M) | 15.2 | 26.8 | 38.4 | 51.8 | 75.7 | 28.3 | 49.6 | 87.7 |
| | | FLOPs (G) | 2.1 | 3.9 | 6.9 | 10.1 | 12.3 | 4.4 | 8.6 | 15.2 |
Table 11: Instantiations of CycleMLP with varying complexity. $E_{i}$ and $L_{i}$ denote the expansion ratio and the number of repeated layers. Our design principle is inspired by the philosophy of ResNet (He et al., 2016), where the channel dimension increases while the spatial resolution shrinks as the layers go deeper.

Further kernel optimization for Cycle FC may bring faster speed but is beyond the scope of this work.

# E.2 COCO INSTANCE SEGMENTATION

We conduct object detection and instance segmentation experiments on the COCO (Lin et al., 2014) dataset, which contains 118K training and 5K validation images. We adopt the mmdetection (Chen et al., 2019) toolbox for all experiments in this subsection. To evaluate our CycleMLP backbones, we adopt two widely used detectors, i.e., RetinaNet (Lin et al., 2017) and Mask R-CNN (He et al., 2017). All backbones are initialized with ImageNet pre-trained weights, and the newly added layers are initialized via Xavier initialization (Glorot & Bengio, 2010). We use the AdamW (Loshchilov & Hutter, 2017) optimizer with an initial learning rate of $1 \times 10^{-4}$. All models are trained on 8 Tesla V100 GPUs with a total batch size of 16 for 12 epochs (i.e., the $1 \times$ training schedule). During training, the input images are resized so that the shorter side is 800 pixels and the longer side does not exceed 1333 pixels. We do not use the multi-scale (Carion et al., 2020; Zhu et al., 2020; Sun et al., 2021) training strategy. In the testing stage, the shorter side of the input images is resized to 800 pixels with no constraint on the longer side.

# E.3 ADE20K SEMANTIC SEGMENTATION

We conduct semantic segmentation experiments on the ADE20K (Zhou et al., 2017) dataset, which covers a broad range of 150 semantic categories. ADE20K contains 20K training, 2K validation, and 3K testing images. We adopt the mmsegmentation (Contributors, 2020) toolbox as our codebase in this subsection.
The experimental settings mostly follow PVT (Wang et al., 2021b): models are trained for 40K iterations on 8 Tesla V100 GPUs with 4 samples per GPU. The backbone is initialized with weights pre-trained on ImageNet. All models are optimized by AdamW (Loshchilov & Hutter, 2017). The initial learning rate is $2 \times 10^{-4}$ with a polynomial decay power of 0.9. Input images are randomly resized and cropped to $512 \times 512$ during training. During testing, we rescale images to a shorter side of 512 pixels. We adopt the simple Semantic FPN (Kirillov et al., 2019) as the segmentation head, following Wang et al. (2021b), for fair comparison.
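The recipe above can be written down as an mmsegmentation-style config fragment. This is a sketch only: the field names follow general mmsegmentation 0.x conventions and are assumptions, not the authors' released configuration.

```python
# Sketch of the ADE20K training settings described above, phrased as an
# mmsegmentation-0.x-style config fragment (field names are assumed
# conventions, not taken from the authors' released code).
optimizer = dict(type='AdamW', lr=2e-4)                     # AdamW, initial LR 2e-4
lr_config = dict(policy='poly', power=0.9, by_epoch=False)  # polynomial decay, power 0.9
runner = dict(type='IterBasedRunner', max_iters=40000)      # 40K training iterations
data = dict(samples_per_gpu=4)                              # 8 GPUs x 4 samples = batch 32
crop_size = (512, 512)                                      # random resize + crop at training
```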
| Method | Backbone | val MS mIoU | Params | FLOPs |
| --- | --- | --- | --- | --- |
| DANet (Fu et al., 2019a) | ResNet-101 | 45.2 | 69M | 1119G |
| DeepLabv3+ (Chen et al., 2018) | ResNet-101 | 44.1 | 63M | 1021G |
| ACNet (Fu et al., 2019b) | ResNet-101 | 45.9 | - | - |
| DNL (Yin et al., 2020) | ResNet-101 | 46.0 | 69M | 1249G |
| OCRNet (Yuan et al., 2020) | ResNet-101 | 45.3 | 56M | 923G |
| UperNet (Xiao et al., 2018) | ResNet-101 | 44.9 | 86M | 1029G |
| OCRNet (Yuan et al., 2020) | HRNet-w48 | 45.7 | 71M | 664G |
| DeepLabv3+ (Chen et al., 2018) | ResNeSt-101 | 46.9 | 66M | 1051G |
| DeepLabv3+ (Chen et al., 2018) | ResNeSt-200 | 48.4 | 88M | 1381G |
| UperNet (Xiao et al., 2018) | Swin-T (Liu et al., 2021b) | 45.8 | 60M | 945G |
| UperNet (Xiao et al., 2018) | AS-MLP-T (Lian et al., 2021) | 46.5 | 60M | 937G |
| UperNet (Xiao et al., 2018) | CycleMLP-T (ours) | 47.1 | 60M | 937G |
| UperNet (Xiao et al., 2018) | Swin-S (Liu et al., 2021b) | 49.5 | 81M | 1038G |
| UperNet (Xiao et al., 2018) | AS-MLP-S (Lian et al., 2021) | 49.2 | 81M | 1024G |
| UperNet (Xiao et al., 2018) | CycleMLP-S (ours) | 49.6 | 81M | 1024G |
| UperNet (Xiao et al., 2018) | Swin-B (Liu et al., 2021b) | 49.7 | 121M | 1188G |
| UperNet (Xiao et al., 2018) | AS-MLP-B (Lian et al., 2021) | 49.5 | 121M | 1166G |
| UperNet (Xiao et al., 2018) | CycleMLP-B (ours) | 49.7 | 121M | 1166G |
Table 12: Semantic segmentation results of different backbones on the ADE20K validation set.

# F SAMPLING STRATEGIES

We explore further sampling strategies in this subsection, including random sampling and dilated sampling inspired by dilated convolution (Yu & Koltun, 2016; Chen et al., 2018) (as shown in Figure 6). We also compare the dense sampling method with ours.

Random sampling. As shown in Table 13, we run random sampling for three independent trials and observe that the average Top-1 accuracy on ImageNet-1K drops by $1.3\%$. We hypothesize that the decreased performance is caused by random sampling completely disrupting the semantic information of objects, which is essential for image recognition. In contrast, our cyclical sampling aggregates adjacent pixels, which helps capture semantic information.

Dilated stepsize (Figure 6). As shown in Table 13, dilated sampling performs better than random sampling ($+1.0\%$ acc) but worse than ours ($-0.5\%$ acc). Compared with random sampling, dilated solutions have an advantage in local information aggregation. However, compared with cyclical sampling, they lose fine-grained information for recognition, which may hurt accuracy to some extent.

Dense sampling. We conduct ablation studies using dense sampling strategies (i.e., vanilla convolution with kernel sizes $1 \times 3$ and $3 \times 1$). Since dense sampling considerably increases the models' parameters and FLOPs, fully training the models for 300 epochs is too costly. Therefore, for a fair comparison, we train both models for 100 epochs with exactly the same training configuration. The results in Table 14 demonstrate that the sparse sampling strategy (ours) outperforms the dense one.
The comparison indicates that dense sampling introduces redundant parameters, which makes the model harder to optimize. Our sparse sampling strategy, with fewer parameters, proves efficient and optimization-friendly.

![](images/fc25756b447e997c7aec25e409ab92871caab277e8328a3ba6aa6590b000482c.jpg)
Figure 6: An example of dilated CycleMLP with dilation=2 and stepsize=3.
| Sampling | Params | FLOPs | Top-1 Acc |
| --- | --- | --- | --- |
| dilation=2 | 26.8M | 3.9G | 81.1 |
| Random, S=1 | 26.8M | 3.9G | 80.4 |
| Random, S=2 | 26.8M | 3.9G | 80.2 |
| Random, S=3 | 26.8M | 3.9G | 80.4 |
| CycleMLP | 26.8M | 3.9G | 81.6 |
+ +Table 13: Comparison with dilated and random sampling. For random sampling, we conduct the experiments for three independent trials with three seeds (S=1, 2, 3). + +
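The three strategies compared in Table 13 differ only in how the per-channel sampling offset is chosen. A minimal sketch of the three patterns; the exact offset formulas here are illustrative assumptions, not the paper's kernel definition:

```python
import random

def cyclical_offsets(num_channels, stepsize):
    # Cyclical sampling: the offset cycles through the stepsize window as the
    # channel index increases, so adjacent channels sample adjacent pixels.
    return [c % stepsize - stepsize // 2 for c in range(num_channels)]

def dilated_offsets(num_channels, stepsize, dilation):
    # Dilated sampling: the same cycle, but adjacent offsets are spaced
    # `dilation` pixels apart, trading fine-grained detail for receptive field.
    return [(c % stepsize - stepsize // 2) * dilation for c in range(num_channels)]

def random_offsets(num_channels, stepsize, seed):
    # Random sampling: each channel draws its offset uniformly from the window,
    # destroying the local aggregation that cyclical sampling preserves.
    rng = random.Random(seed)
    return [rng.randrange(stepsize) - stepsize // 2 for _ in range(num_channels)]

print(cyclical_offsets(6, 3))    # [-1, 0, 1, -1, 0, 1]
print(dilated_offsets(6, 3, 2))  # [-2, 0, 2, -2, 0, 2]
```

The cyclical pattern keeps every window of `stepsize` consecutive channels looking at `stepsize` adjacent pixels, which is the locality argument made above for its advantage over random sampling.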
| Operators | Dense | Params | FLOPs | Top-1 Acc |
| --- | --- | --- | --- | --- |
| Conv: 1×3 + 3×1 | ✓ | 34.3M | 5.1G | 75.0 |
| CycleMLP: 1×3 + 3×1 | ✗ | 26.8M | 3.9G | 76.1 |
Table 14: Comparison with dense sampling. Considering training time, we train both models for only 100 epochs for a fair comparison.
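The parameter gap in Table 14 follows directly from the two sampling patterns: a dense $1 \times 3$ (or $3 \times 1$) convolution stores a weight for every kernel position, while sparse cyclical sampling keeps one sampled point per input channel, matching a plain channel-mixing FC layer. A back-of-the-envelope sketch (counting only the two branches, ignoring biases and the rest of the network, which is why the ratio here is larger than the 34.3M vs. 26.8M totals in the table):

```python
def conv_params(c_in, c_out, kernel):
    # Dense sampling: every kernel position has its own weight matrix.
    kh, kw = kernel
    return c_in * c_out * kh * kw

def cycle_fc_params(c_in, c_out):
    # Sparse (cyclical) sampling: one sampled point per input channel, so the
    # weight count matches a plain channel-mixing fully connected layer.
    return c_in * c_out

c = 256
dense = conv_params(c, c, (1, 3)) + conv_params(c, c, (3, 1))
sparse = 2 * cycle_fc_params(c, c)
print(dense // sparse)  # 3: the dense branches cost 3x the parameters
```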
| branch1 | branch2 | ImgNet Top-1 | ADE20K mIoU |
| --- | --- | --- | --- |
| 7×1 | 1×7 | 81.6 | 43.9 |
| 7×2 | 2×7 | 81.5 | 43.4 |
| 7×3 | 3×7 | 81.4 | 42.7 |
| 4×4 | 4×4 | 81.5 | 43.1 |
Table 15: Comparison of different stepsizes (even and odd), including $7 \times 2$ and $4 \times 4$.

# G VISUALIZATION EXAMPLES

To ease understanding of our proposed CycleMLP, we visualize several instances in Figure 7, including the general case with stepsize $3 \times 3$ (7a), an even stepsize (7b), and cases where the stepsize along the height or width equals 1 (7c, 7d).

We note that, given a specific number of input and output channels, the number of parameters of CycleMLP does not change no matter how the stepsize changes. Therefore, there is a trade-off in representation ability between the spatial and channel dimensions, which is discussed in detail in the following experimental analysis.

Experiments: We further conduct experiments on CycleMLPs with stepsizes of $2 \times 7$, $7 \times 2$, $7 \times 3$, $3 \times 7$, and $4 \times 4$. The results are summarized in Table 15. For fair comparison, all the models in the table have the same parameters and FLOPs. We observe that the model with stepsizes of $1 \times 7$ and $7 \times 1$ achieves the best performance, especially for semantic segmentation on ADE20K. To analyze the impact of stepsize on performance, we refer to Figure 7. One can see that enlarging the stepsize expands the spatial receptive field. However, this comes at the cost of reducing the number of periods (groups) running along the channel dimension, which may hurt channel-wise representation ability. Taking a feature map with $C = 18$ as an example, CycleMLP with stepsize $3 \times 3$ (Figure 7(a)) runs through only 2 channel groups (curly brackets in the figure), whereas CycleMLP with stepsize $3 \times 1$ (Figure 7(c)) runs through 6 groups in total, making better use of the representation in the channel dimension. That is to say, there is a trade-off between spatial and channel representation.
We empirically found that CycleMLP with stepsizes of $1 \times 7$ and $7 \times 1$ achieves the best performance.

![](images/9ea1c9845e91054d2299e92eab859dea528d905c23239375ff4f78854e3ba5ae.jpg)
(a) $S_H\times S_W\colon 3\times 3$

![](images/672dfcfb874ac06205c4a9365851805c0bd9512c27f3247fc7233fb5a2b729b2.jpg)
(b) $S_H\times S_W\colon 3\times 2$

![](images/353a2fbedf4312658fb9177897d663ab7d5e227f80ce40f01b381dc8cd6732e0.jpg)
(c) $S_H\times S_W\colon 3\times 1$

![](images/7bcc5126782757dae69bca1e6a4631829b949175b149451a3d7624849316c590.jpg)
(d) $S_H\times S_W\colon 1\times 3$
Figure 7: Examples of stepsize cases. We separate the feature map along the width dimension for convenient visualization. $\star$ denotes the output position. We place the absolute coordinates (h, w, c) of the sampled points to the left of the feature. Sampled points within a curly bracket ({}) belong to the same period (group). Dashed lines link two cyclical periods.
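The group-count arithmetic behind the $C = 18$ example above is simple: one cyclical period covers $S_H \times S_W$ channels, so a feature map with $C$ channels is split into $C / (S_H \cdot S_W)$ periods (groups). A sketch checking the numbers quoted in the text:

```python
def num_channel_groups(channels, stepsize_h, stepsize_w):
    # One cyclical period covers stepsize_h * stepsize_w channels, so the
    # number of periods (groups) shrinks as the stepsize window grows.
    return channels // (stepsize_h * stepsize_w)

# The C = 18 example from the text:
assert num_channel_groups(18, 3, 3) == 2  # stepsize 3x3 -> 2 groups
assert num_channel_groups(18, 3, 1) == 6  # stepsize 3x1 -> 6 groups
```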
# DATA-EFFICIENT GRAPH GRAMMAR LEARNING FOR MOLECULAR GENERATION

Minghao Guo $^{1}$ , Veronika Thost $^{2,3}$ , Beichen Li $^{1}$ , Payel Das $^{2,3}$ , Jie Chen $^{2,3}$ , Wojciech Matusik $^{1}$

$^{1}$ MIT CSAIL, $^{2}$ MIT-IBM Watson AI Lab, $^{3}$ IBM Research

# ABSTRACT

The problem of molecular generation has received significant attention recently. Existing methods are typically based on deep neural networks and require training on large datasets with tens of thousands of samples. In practice, however, the size of class-specific chemical datasets is usually limited (e.g., dozens of samples) due to labor-intensive experimentation and data collection. This presents a considerable challenge for the deep learning generative models to comprehensively describe the molecular design space. Another major challenge is to generate only physically synthesizable molecules. This is a non-trivial task for neural network-based generative models since the relevant chemical knowledge can only be extracted and generalized from the limited training data. In this work, we propose a data-efficient generative model that can be learned from datasets with orders of magnitude smaller sizes than common benchmarks. At the heart of this method is a learnable graph grammar that generates molecules from a sequence of production rules. Without any human assistance, these production rules are automatically constructed from training data. Furthermore, additional chemical knowledge can be incorporated in the model by further grammar optimization. Our learned graph grammar yields state-of-the-art results on generating high-quality molecules for three monomer datasets that contain only $\sim 20$ samples each.
Our approach also achieves remarkable performance in a challenging polymer generation task with only 117 training samples and is competitive against existing methods using 81k data points. Code is available at https://github.com/gmh14/data_efficient_grammar.

# 1 INTRODUCTION

The rise of computational approaches has started to have a significant impact on the discovery of materials and drugs. Recent advances in machine learning, especially deep learning (DL), have driven rapid progress in generating novel molecular structures (Maziarka et al., 2020; Xu et al., 2019; Hoffman et al., 2022). Various forms of generative models, including generative adversarial networks (De Cao & Kipf, 2018; Maziarka et al., 2020), variational autoencoders (VAEs) (Jin et al., 2018; 2020; Liu et al., 2018; Sattarov et al., 2019), and reinforcement learning (You et al., 2018), have been exploited to represent the complicated molecular design space and generate new molecules. They typically formulate molecular generation as a problem of distribution learning, where the generative model first learns to reproduce the distribution of a training set before generating new molecular structures. Generative models have also managed to integrate certain chemical constraints (e.g., valency restrictions) (Jin et al., 2018; Liu et al., 2018) and shown promising results on common benchmarks (Irwin et al., 2012; Ramakrishnan et al., 2014). However, DL-based generative models face a serious limitation: they require large amounts of training data to achieve reasonable performance.

In practice, molecule data is not always abundant (Stanley et al., 2021; Altae-Tran et al., 2017; Subramanian et al., 2016); for instance, the focus may be on a specific type of molecules fulfilling certain ingredient requirements. Particularly in the context of polymers, large amounts of training data are not available, and therefore DL models use manually constructed or generated data (St.
John et al., 2019; Jin et al., 2020; Ma & Luo, 2020). Real datasets, as used in one of the state-of-the-art papers on polyurethane property prediction, have as few as 20 samples (Menon et al., 2019). In such scenarios, designing a pure DL-based model is challenging.

![](images/5af57992b25ab7cbbd2dca07272bc82b388024733ddddce8d37af926d1769848.jpg)
Figure 1: Overview. Given molecules and domain-specific metrics to be optimized, we construct a graph grammar, which can serve as a generative model. The graph grammar construction process automatically learns the grammar rules by optimizing the metrics.

Recent renewed interest in formal grammars (Kajino, 2019; Krenn et al., 2019; Nigam et al., 2021; Guo et al., 2021) provides an alternative to pure DL methods. In formal language theory, a grammar is a set of production rules describing how to generate valid strings according to the language's syntax. A chemical grammar may thus be considered an interpretable and compact design model that simultaneously serves as a molecular representation and a generative model; even domain-specific constraints can be explicitly incorporated into the rules. Recent examples range from string-based (Krenn et al., 2019; Nigam et al., 2021) to hypergraph-based (Kajino, 2019) and polymer-specific grammars (Guo et al., 2021). Grammar-based generative models do not rely on large training datasets and easily extrapolate to generate molecules outside the distribution of training samples. Yet, they have two major drawbacks. First, current approaches require that chemical grammars be manually designed by human experts (Krenn et al., 2019; Nigam et al., 2021; Dai et al., 2018). This is a tedious process that heavily relies on expertise in chemistry. Moreover, the existing grammars are very fine-grained (i.e., rules mostly attach atoms) in order to cover the syntax of general molecules rather than a specific dataset.
Hence, it is not straightforward to incorporate the bias of given data (e.g., certain molecular substructures) into such grammars. The second drawback is that it remains challenging to integrate chemistry knowledge beyond simple constraints (e.g., valency restrictions) into the grammar. Current grammar construction approaches fail to capture more abstract or complex aspects such as the diversity or synthesizability of the generated molecules, which constrains their practicality. Solutions such as Kajino (2019) resort to costly optimization on the level of molecules (or their latent representations) after the grammar is built or learned. For a more detailed discussion of related work, see Sec. 2.

In this paper, we propose a generative model combining complex graph grammar construction with a relatively simple and effective learning technique. In particular, our grammar incorporates substructures of varying sizes (i.e., above the atom level), and the construction process directly optimizes various chemical metrics (e.g., distribution statistics and synthesizability) while satisfying specific chemical constraints (e.g., valency restrictions). Moreover, we gain the benefits of symbolic knowledge representation: explainability and data efficiency. Our evaluation focuses on polymers, particularly their monomer building blocks. Thus, we curate new, realistic polymer datasets gathered from the literature that represent specific classes of monomers. Note that our model works for arbitrary molecules.

**Framework.** Figure 1 outlines our approach. Given molecules and domain-specific metrics to be optimized, we iteratively construct and evaluate a graph grammar as our generative model. We consider the construction as a minimum spanning forest problem and combine it with optimization of the metrics, via a learnable function $\mathcal{F}_{\theta}$ determining which rules to construct.

**Results.**
Our model successfully deals with extreme settings – from a DL perspective – learning meaningful production rules based on only $\sim 10$ samples (e.g., of a specific class); this is of significant importance in a practical setting (Stanley et al., 2021; Altae-Tran et al., 2017) but has been ignored so far. In particular, our model is able to generate members of a specific monomer class with a decent success rate. No previous state-of-the-art system achieves similar performance in our experiments. Generally, our approach works on training data that is orders of magnitude smaller than the amount needed by DL-based systems to produce meaningful results. Given $\sim 100$ samples, we achieve performance comparable to state-of-the-art systems trained on 81k samples, across a wide range of common and new evaluation metrics. Besides, our grammar optimization method can adjust to any user-defined domain-specific metrics. The learned grammars can capture domain-specific knowledge explicitly, e.g., the characteristic functional groups of a given class of polymers can be extracted from the production rules. + +# 2 RELATED WORK + +Generative Models for Molecules. Existing models can be categorized based on their molecule representation. It is common to use SMILES strings (Weininger, 1988). Yang et al. (2017), Olivecrona et al. (2017), Chenthamarakshan et al. (2020), Schiff et al. (2021) and Gómez-Bombarelli et al. (2018) use recurrent neural networks to generate the strings in a sequential manner. For instance, Gómez-Bombarelli et al. (2018) propose a VAE based on sequence-to-sequence encoders and decoders. Guimaraes et al. (2017), Sanchez-Lengeling et al. (2017), and Dai et al. (2018) additionally apply optimization objectives to enforce similarity or semantic correctness to generate valid SMILES. Alternatively, molecules can be treated as graphs. Simonovsky & Komodakis (2018), Ma et al. 
(2018), and De Cao & Kipf (2018) generate the graphs in a single step, directly outputting adjacency matrices and node labels. GraphNVP (Madhawa et al., 2019) uses two steps to generate the node labels based on the adjacency matrix; it uses a model to encode molecules and the same model with its layers reversed to generate new molecules. You et al. (2018); Li et al. (2018); Samanta et al. (2020); Liu et al. (2018); Liao et al. (2019); Jin et al. (2018; 2020) build molecule graphs iteratively based on either atom nodes or substructures. For example, JT-VAE (Jin et al., 2018) first generates a tree and then expands some of its nodes by attaching substructures based on a vocabulary mined from given molecules; similarly, HierVAE (Jin et al., 2020) also uses a vocabulary, but improves upon the combinatorial attachment process by directly applying a multi-resolution representation, allowing the generation of much larger and more diverse molecules. All these methods depend on DL and require large training datasets. Moreover, the resulting models are usually not interpretable. Our approach alleviates this obstacle and can deal with the practically common setting of only dozens of available data points.

Generative Models using Graph Grammars. Our approach belongs to the class of models generating new graphs based on the production rules of a graph grammar. These models are non-parametric, interpretable, and can incorporate meaningful graph properties into the rules. One option is to use grammars that are manually designed by human experts (Dai et al., 2018; Nigam et al., 2021). For instance, STONED (Nigam et al., 2021) generates new molecules based on a string-based representation of given molecules by replacing tokens (representing atoms) in accordance with the SELFIES grammar (Krenn et al., 2019). While the simplicity of this approach is attractive, the generated molecules are not necessarily reasonable from a chemical perspective (e.g., they might not be synthesizable).
This problem can be overcome by mining the grammar from real data. Recently, several automatic techniques that explicitly construct a graph grammar have been proposed in the context of large-scale graphs to facilitate understanding and analysis (Sikdar et al., 2019; Aguinaga et al., 2018; Hibshman et al., 2019). These works are domain-independent and do not allow specialization of the constructed grammar to reflect domain-specific knowledge. Closest to our method is MHG (Kajino, 2019), a framework that constructs a molecular hyperedge-replacement grammar based on Aguinaga et al. (2018). From the given data, MHG learns a fine-grained grammar whose rules iteratively attach single atoms and therefore incorporates hard chemical constraints (e.g., valency restrictions). MHG then applies a VAE conditioned on the grammar to the molecules' tree decompositions in order to also learn soft constraints (e.g., stability). Finally, it uses Bayesian optimization to guide molecule generation. The fine-grained rules in MHG allow for rich diversity. However, our experiments show that these rules make it hard to capture distribution-specific properties, e.g., characteristic substructures of a specific molecular class. Such a drawback limits the practical use of MHG. In contrast, our approach focuses on subgraphs instead of individual atoms only. It directly incorporates domain-specific knowledge into grammar construction and therefore avoids complicated post-processing to achieve high generation performance.

# 3 PRELIMINARIES

Molecular Hypergraph. Molecules can naturally be represented as graphs by taking the atoms as nodes and the bonds as edges. We particularly use a hypergraph representation. Given a molecule $M$ , the hypergraph $H_{M} = (V, E_{H})$ consists of a set of nodes $V$ and a set of hyperedges $E_{H}$ ; a hyperedge can join more than two nodes.
Given a regular molecular graph, we construct a hypergraph by including all nodes and adding hyperedges as follows: a hyperedge is added for each bond that joins only two nodes, and for each individual ring (including aromatic ones) that joins all nodes (more than 2) in the ring. Consider Figure 2(a) where we add two hyperedges of the latter kind, one for each ring. + +![](images/d832ca2350f916690891755c125cd7bb55949b36e343ca5996c9c2f61c444559.jpg) +Figure 2: Examples of a molecular hypergraph, one of our possible graph grammars for it, and an application of the grammar for generating new molecules. + +Formal Grammar. A grammar $G = (\mathcal{N}, \Sigma, \mathcal{P}, \mathcal{X})$ has a finite set $\mathcal{N}$ of non-terminal symbols, an initial symbol $\mathcal{X}$ , and a finite set of terminal symbols $\Sigma$ . It describes how to build strings from a language's alphabet using a set of production rules $\mathcal{P} = \{p_i | i = 1, \dots, k\}$ of form $p_i : LHS \to RHS$ , where $LHS$ is short for left-hand side and $RHS$ for right-hand side. Based on such a grammar, a string (of terminal symbols) is generated starting at $\mathcal{X}$ , by iteratively selecting rules whose left-hand side matches a non-terminal symbol inside the current string and replacing that with the rule's right-hand side, until the string does not contain any non-terminals. + +Minimum Spanning Forest (MSF). A minimum spanning tree (MST) is a subset of the edges of a connected, edge-weighted undirected graph that connects all the vertices, without any cycles and with the minimum possible total edge weight. Minimum spanning forest is the union of MSTs for a set of unconnected edge-weighted undirected graphs. + +# 4 GRAPH GRAMMAR LEARNING WITH DOMAIN-SPECIFIC OPTIMIZATION + +Graph Grammar. We focus on a formal grammar over molecule graphs instead of strings, a graph grammar. As shown in Figure 2(b), both left and right-hand side of each production rule are graphs. 
These graphs contain non-terminal nodes (shadowed squares) and terminal nodes (colored circles), representing atoms. The white nodes are anchor nodes, which do not change from the left-hand side to the right-hand side. Molecule graph generation based on a graph grammar is analogous to generation with string-based grammars as described in Sec. 3 (see also Figure 2(c)). To determine whether a production rule $p_i$ is applicable at each step, we use subgraph matching (Gentner, 1983) to test whether the current graph contains a subgraph that is isomorphic to the rule's left-hand side. Since the subgraphs are usually small in scale, the matching process is efficient in practice.

Overall Pipeline. As shown in Figure 1, the input to our algorithm consists of a set of molecular structures and a set of evaluation metrics (e.g., diversity and synthesizability). The goal is to learn a graph grammar which can be used for molecule generation. To this end, we first consider each molecule as a hypergraph. The grammar construction is a bottom-up procedure, which iteratively creates production rules by contracting hyperedges (shown in Figure 3). The hyperedges to contract are determined by a parameterized function $\mathcal{F}_{\theta}$, implemented as a neural network. We simultaneously perform multiple randomized searches to obtain multiple grammars, which are evaluated with respect to the input metrics. This approach learns how to create a grammar that samples molecules maximizing the input metrics. Hence, domain-specific knowledge can be incorporated into the grammar-based generative model. The grammar construction and learning process are described in detail in Sec. 4.1 and 4.2. The final molecule generation is detailed in Appendix A.

![](images/ff9372631f07f2373f6ad4fe256c7a315545880f4899885ab5f6dc1501504d94.jpg)
Figure 3: Overview of bottom-up grammar construction.
We optimize the iterative, bottom-up grammar construction by learning how to create a grammar that samples molecules fitting the input metrics. Specifically, we learn which edges to select for contraction in each iteration step using a neural network $\mathcal{F}_{\theta}$. We perform this construction on all input molecules simultaneously.

# 4.1 BOTTOM-UP GRAMMAR CONSTRUCTION

We describe our grammar construction approach at a high level (shown in Figure 3). A bottom-up search builds up production rules from the finest-grained level, which comprises individual hyperedges in the molecular hypergraph. We construct a grammar by iteratively sampling a set of hyperedges and contracting them into an individual node. The sampling algorithm is described in more detail in Sec. 4.2. For each contraction step, a production rule is constructed and added to the grammar, and we obtain a new hypergraph with fewer nodes and edges. We simultaneously perform the hyperedge selection and rule construction for all input molecules until all hyperedges are contracted. Without loss of generality, we describe the construction process for a single molecule.

At iteration $t$, we consider the current graph $H_{M,t} = (V,E_H)$ and sample $m$ hyperedges $E_t^* = \{e_t^{(i)}\in E_H \mid i = 1,\dots,m\}$. Let $V_{t}^{*} = \{v_{t}^{(i)}\in V \mid i = 1,\dots,n\}$ be the nodes joined by these hyperedges. We then extract all connected components with respect to these hyperedges. Next, we convert each connected component $H_{sub,t}^{(i)} = (V_{sub,t}^{(i)},E_{sub,t}^{(i)})$ into a production rule. The anchor nodes are those nodes from $V$ that are connected to nodes from $H_{sub,t}^{(i)}$ in the original graph $H_{M}$ but do not occur in $H_{sub,t}^{(i)}$ themselves.
We also provide notation for the anchor nodes and the relevant edges:

$$
V_{anc,t}^{(i)} = \{v \mid (s,v) \in E_H,\, s \in V_{sub,t}^{(i)},\, v \notin V_{sub,t}^{(i)}\}, \qquad E_{anc,t}^{(i)} = \{(s,v) \mid s \in V_{anc,t}^{(i)},\, v \in V_{sub,t}^{(i)}\}.
$$

Then we construct the production rule $p_i: LHS \to RHS$ with the non-terminal node $\mathcal{R}^*$ on the left:

$$
LHS := H(V_L, E_L), \quad V_L = \{\mathcal{R}^*\} \cup V_{anc,t}^{(i)}, \quad E_L = \{(\mathcal{R}^*, v) \mid v \in V_{anc,t}^{(i)}\}, \tag{1}
$$

$$
RHS := H(V_R, E_R), \quad V_R = V_{sub,t}^{(i)} \cup V_{anc,t}^{(i)}, \quad E_R = E_{anc,t}^{(i)} \cup E_{sub,t}^{(i)}.
$$

When all connected components have been converted into production rules, we update the hypergraph to $H_{M,t+1}$ by replacing each connected component with the non-terminal node $\mathcal{R}^*$. The above process continues until the hypergraph consists of a single non-terminal node. For this final rule, we use the initial symbol $\mathcal{X}$ instead of $\mathcal{R}^*$ on the left-hand side.

Our proposal has several properties: (1) As a generative model, the grammar can reproduce all input molecules. (2) Since the production rules are constructed from subgraphs of real molecules, valency conditions are naturally obeyed and all generated molecules are thus valid. (3) The generation not only interpolates the training data but also extrapolates to molecular structures outside the distribution of previously seen examples. (4) The constructed grammar roughly follows Chomsky normal form (Chomsky, 1959) and hence is easy to parse and lends itself to explanation.

# 4.2 OPTIMIZING GRAMMAR CONSTRUCTION

So far, we have left open a critical question: how to optimize the grammar for the input metrics?
Since the grammar construction is completely determined by the sequence of selected hyperedge sets, we can convert the optimization of the grammar into the optimization of the hyperedge selection sequence; in particular, note that no additional hyperparameters are introduced. Thus, the variable of the optimization problem is the selection sequence, with the objective of maximizing the input metrics.

We formulate the search for a sequence of hyperedge sets as an MSF problem. The bottom-up grammar construction process can be considered a search for a spanning forest over all input graphs. Note that instead of the structure of the MSF itself, we focus on the order in which hyperedges are added to the MSF. This order is determined by an edge-weight function $\mathcal{F}:\mathcal{E}_H\to \mathbb{R}$ mapping each hyperedge in every considered molecular hypergraph to a scalar value. Optimizing the hyperedge selection is equivalent to optimizing the edge-weight function. Note that this kind of function is similar to the concept of a potential function in the field of graphical models (Barber, 2012). Our goal is to learn a potential function $\mathcal{F}(\cdot)$ that maximizes the given evaluation metrics.

We define our edge-weight function as a parameterized function of hyperedge features, $\mathcal{F}(e;\theta) = \mathcal{F}_{\theta}(f(e))$, where $f(\cdot)$ is a feature extractor for individual hyperedges $e$, and $\theta$ are the parameters of $\mathcal{F}(\cdot)$ to be optimized. There is no specific restriction on the choice of $f(\cdot)$, so we use a pretrained neural network in our experiments. Our optimization objective is $\max_{\theta}\sum_{i}\lambda_{i}\mathcal{M}_{i}$, where $\mathcal{M}_i$ is the value of the $i$-th input grammar metric and $\lambda_{i}$ is its weight.
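A minimal sketch of these components; the feature extractor and the linear form of $\mathcal{F}_{\theta}$ below are placeholders (the paper uses a pretrained network for $f$), and the metric values are toy numbers:

```python
# Sketch of the edge-weight function F(e; theta) = F_theta(f(e)) and the
# weighted metric objective. All concrete choices (2-d features, linear
# F_theta, toy metric values) are illustrative assumptions.

def f(hyperedge):
    # Stand-in feature extractor for a hyperedge given as a tuple of node ids;
    # the paper uses a pretrained neural network here.
    return [float(len(hyperedge)), float(sum(hyperedge))]

def make_weight_fn(theta):
    # F_theta as a simple linear map over the extracted features.
    def F(edge):
        return sum(t * x for t, x in zip(theta, f(edge)))
    return F

def objective(metric_values, lambdas):
    # The scalar being maximized: sum_i lambda_i * M_i.
    return sum(l * m for l, m in zip(lambdas, metric_values))

F = make_weight_fn([0.5, -0.1])
edge_weight = F((0, 1, 2))                 # weight for a hyperedge joining nodes 0, 1, 2
score = objective([0.8, 0.6], [1.0, 0.5])  # e.g. diversity + 0.5 * synthesizability
```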
Since most grammar metrics can only be obtained by evaluating a set of molecules generated by the grammar, it is impossible to obtain the gradient of $\mathcal{M}_i$ with respect to the parameters $\theta$ via the chain rule. We address this problem by formulating the MSF construction as Monte Carlo (MC) sampling. To obtain the gradient of the non-differentiable evaluation metrics, we draw inspiration from task-based learning (Donti et al., 2017; Chen et al., 2019) and reinforcement learning, using REINFORCE (Williams, 1992) to optimize the potential function. Specifically, we define a random variable $X: \Omega \to \{0,1\}$ on each hyperedge, where $X = 1$ ($X = 0$) means the hyperedge is (not) selected. In each iteration of grammar construction, $X$ follows a Bernoulli distribution with probability $\phi(e;\theta)$ of sampling $e$:

$$
X \sim \operatorname{Bernoulli}(\phi(e;\theta)), \quad \phi(e;\theta) = P(X = 1) = \sigma(-\mathcal{F}_{\theta}(f(e))), \tag{2}
$$

where $\sigma(\cdot)$ is the sigmoid function. Under $\phi(e;\theta)$, the larger the weight of edge $e$, the lower the probability of $e$ being sampled in the current iteration, which matches the objective of an MSF. We sample $X$ for each hyperedge and construct grammar production rules iteratively as described in Sec. 4.1. Suppose the constructed grammar is $G$. Since $G$ is determined by the sampled order of hyperedges, we have $p(G) = p(\mathcal{C}(\mathbf{X})) = p(\mathbf{X}) = \prod_{t} \prod_{j} \phi(e_t^{(j)}; \theta)^{X_t^{(j)}} (1 - \phi(e_t^{(j)}; \theta))^{1 - X_t^{(j)}}$, where $\mathcal{C}(\cdot)$ is the grammar construction process, $e_t^{(j)}$ is the $j$-th hyperedge selected in the $t$-th iteration, and $\mathbf{X}$ is the concatenation of the selection variables over all iterations. The optimization objective is:

$$
\max_{\theta} \mathbb{E}_{\mathbf{X}} \left[ \sum_{i} \lambda_{i} \mathcal{M}_{i}(\mathbf{X}) \right]. \tag{3}
$$

We estimate the expectation with MC sampling and REINFORCE, approximating the gradient with respect to $\theta$ as:

$$
\begin{array}{l}
\nabla_{\theta} \mathbb{E}_{\mathbf{X}} \Big[ \sum_{i} \lambda_{i} \mathcal{M}_{i}(\mathbf{X}) \Big] = \int_{\mathbf{X}} \sum_{i} \lambda_{i} \nabla_{\theta} p(\mathbf{X})\, \mathcal{M}_{i}(\mathbf{X})\, \mathrm{d}\mathbf{X} \\
= \mathbb{E}_{\mathbf{X}} \Big[ \sum_{i} \lambda_{i} \nabla_{\theta} \log p(\mathbf{X})\, \mathcal{M}_{i}(\mathbf{X}) \Big] \tag{4} \\
\approx \frac{1}{N} \sum_{n=1}^{N} \sum_{i} \lambda_{i} \nabla_{\theta} \log p(\mathbf{X}^{(n)})\, \mathcal{M}_{i}(\mathbf{X}^{(n)}).
\end{array}
$$

We then apply gradient ascent to maximize the objective. Note that $\mathcal{M}_i(\mathbf{X})$ is normalized to zero mean within each sampling batch to reduce variance during training. The grammar construction, evaluation, and optimization process is repeated until $\theta$ converges or the iteration count exceeds a preset limit. Our approach reduces the complexity of the optimization variable space from combinatorial (all possible orderings of selected hyperedges) to the dimension of the parameter $\theta$. This complexity is also much lower than that of deep neural network generative models trained directly on the molecule dataset.

# 5 EVALUATION

Our evaluation investigates the following five questions:

- How do SOTA models for molecule generation perform on realistic small monomer datasets?
- Is our approach effective in generating specific types of monomers that are synthesizable?
- How do the models perform on larger monomer datasets?
- Can our approach learn to weigh and optimize different metrics according to user needs?
- Can our grammar's explainability support applications, such as functional group extraction?

# 5.1 EXPERIMENT SETUP

Data.
We use three small datasets, each representing a specific class of monomers, which we curated manually from the literature: Acrylates, Chain Extenders, and Isocyanates, containing only 32, 11, and 11 samples, respectively (printed in Appendix G). For comparison and for pretraining baselines, we also use a large collection of 81k monomers from St. John et al. (2019) and Jin et al. (2020).

Evaluation Metrics. We consider metrics commonly used in the literature (Polykovskiy et al., 2020) as well as new ones for assessing both individual sample quality and distribution similarity:

- Validity/Uniqueness/Novelty: Percentage of chemically valid/unique/novel molecules.
- Diversity: Average pairwise molecular distance among generated molecules, where the molecular distance is defined as the Tanimoto distance over Morgan fingerprints (Rogers & Hahn, 2010).
- Chamfer Distance (Fan et al., 2017): Distance between two sets of molecules, wherein the usual pairwise Euclidean distance is replaced by the Tanimoto distance.
- Retro* Score (RS): Success rate of the Retro* model (Chen et al., 2020), which was trained to find a retrosynthesis path to build a molecule from a list of commercially available ones $^2$.
- Membership: Percentage of molecules belonging to the training data's monomer class.

We propose the last three as new metrics. We define the Chamfer Distance for molecules to account for "external" diversity between generated and training data, as a quantitative indicator of the generative model's extrapolation ability. Ideally, generated molecules should not be too close to any training molecule; hence, a larger distance is preferable. Note that commonly used distance metrics focus only on similarity to the training data. Furthermore, we include the Retro* Score as a metric estimating sample synthesizability, since the commonly used SA Score (Ertl & Schuffenhauer, 2009) does not adequately assess more recent molecules (Polykovskiy et al., 2020).
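A minimal sketch of the Diversity and Chamfer metrics, assuming fingerprints are given as sets of "on" bit indices (in practice they would be Morgan fingerprints from a cheminformatics toolkit; the function names are illustrative):

```python
# Sketch of the Diversity and Chamfer metrics over precomputed binary
# fingerprints, each represented as a Python set of "on" bit indices.

def tanimoto_distance(a, b):
    """1 - |intersection| / |union| of two fingerprint bit sets."""
    union = len(a | b)
    return 1.0 - len(a & b) / union if union else 0.0

def diversity(mols):
    """Average pairwise Tanimoto distance within one set of molecules."""
    pairs = [(a, b) for i, a in enumerate(mols) for b in mols[i + 1:]]
    return sum(tanimoto_distance(a, b) for a, b in pairs) / len(pairs)

def chamfer_distance(gen, train):
    """Chamfer distance between generated and training sets, with the
    Tanimoto distance in place of the usual Euclidean one; larger
    values mean the generated set sits further from the training data."""
    d_gt = sum(min(tanimoto_distance(g, t) for t in train) for g in gen) / len(gen)
    d_tg = sum(min(tanimoto_distance(t, g) for g in gen) for t in train) / len(train)
    return d_gt + d_tg

fp1, fp2, fp3 = {0, 1, 2}, {1, 2, 3}, {4, 5}
print(diversity([fp1, fp2, fp3]))           # mean of the 3 pairwise distances
print(chamfer_distance([fp1], [fp2, fp3]))  # 0.5 + 0.75 = 1.25
```

Validity, Novelty, Retro* Score, and Membership, by contrast, require chemistry-specific tooling and are not sketched here.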
For our small datasets, we check class membership as well. Since a polymer's class is usually determined by its monomer types, it is essential that a generative model can generate class members.

Baselines. We compare to various approaches: GraphNVP, JT-VAE, HierVAE, MHG, and STONED; for descriptions see Sec. 2.3. We call our method DEG, short for Data-Efficient Graph Grammar. Appendix B provides the implementation details.

# 5.2 RESULTS ON SMALL, CLASS-SPECIFIC POLYMER DATA

Results. Table 1 shows the results on the Isocyanate data; due to lack of space, the other two tables are in Appendix C.1. Observe that GraphNVP performs rather poorly. The VAEs and existing grammar-based systems perform reasonably well on some metrics, but score low on the RS and Membership metrics. In contrast, our method significantly outperforms the other methods in terms of Membership and Retro* Score on all three datasets. It also achieves the best or comparable performance on all other metrics.

Discussion. Our evaluation shows that learning on monomer datasets, especially when the dataset size is small, is much more challenging than on the larger datasets used in related work (Irwin et al., 2012; Ramakrishnan et al., 2014). GraphNVP performs poorly since it uses the molecular graph adjacency matrix as model input, which is extremely sparse for our relatively large monomer structures. The vocabulary-based JT-VAE and HierVAE perform reasonably well. However, when

Table 1: Results on Isocyanates, best bold, second-best underlined; we omit Novelty since all methods achieved $100\%$; the few valid molecules generated by GraphNVP did not allow for reasonable evaluation on some metrics $(-)$.
| Method | Valid | Unique | Div. | Chamfer | RS | Memb. |
| --- | --- | --- | --- | --- | --- | --- |
| Train data | 100% | 100% | 0.61 | 0.00 | 100% | 100% |
| GraphNVP | 0.16% | - | - | - | 0.00% | 0.00% |
| JT-VAE | 100% | 5.8% | 0.72 | 0.85 | 5.50% | 66.5% |
| HierVAE | 100% | 99.6% | 0.83 | 0.76 | 1.85% | 0.05% |
| MHG | 100% | 75.9% | 0.88 | 0.83 | 2.97% | 12.1% |
| STONED | 100% | 100% | 0.85 | 0.86 | 5.63% | 79.8% |
| DEG | 100% | 100% | 0.86 | 0.87 | 27.2% | 96.3% |
the dataset is small, JT-VAE is not able to mine a vocabulary that would allow it to generate many unique molecules. The more diverse vocabulary of HierVAE clearly fixes this shortcoming, but its low Membership score shows that it does not capture monomer class specifics. In general, grammar-based methods capture class-specific molecule characteristics better than DL-based methods and have higher Membership scores. However, they perform poorly on RS. Specifically, MHG has fine-grained rules that simply attach atoms. This leads to high diversity, but the rules hardly capture domain-specific characteristics. The STONED method iteratively replaces atoms based on the SELFIES grammar and only performs interpolation and exploration based on the training data, making it hard to embed domain-specific knowledge in the generative model. The overall results show that: 1) Our learned, substructure-based grammar successfully captures class specifics, a critical evaluation criterion which has been ignored so far. 2) Other critical, domain-specific metrics such as RS can successfully be optimized during grammar learning. Our score is $\geq 5\times$ higher than the others'. More importantly, the optimization is done in situ during grammar construction and hence avoids post-processing. 3) Our method is the only one that consistently achieves stable performance. Altogether, these results clearly differentiate us from the others.

# 5.3 RESULTS ON LARGE POLYMER DATASET

Our method is developed to model specific classes of complex molecules (e.g., classes of different monomers or polymers) and is expected to deal with most practical scenarios with only a few dozen samples for training. However, as mentioned above, monomer data itself differs from the molecules used in related work. Therefore, we also investigate how our method performs on large monomer datasets in comparison with existing methods.
Since our approach is more data-efficient, though relatively complex, we apply it to a $0.15\%$ subset. Details are provided in Appendix B.

Results. Table 3 in the appendix shows the results. In summary, some SOTA systems such as SMILESVAE and GraphNVP fail to capture any distribution specifics and mostly generate invalid molecules. JT-VAE and the grammar-based baselines (MHG, STONED) perform poorly with respect to distribution statistics, but their sample quality is reasonable. HierVAE performs extremely well on all metrics except Chamfer distance. Our approach, though trained on only $0.15\%$ of the data, can generally compete with HierVAE and achieves better sample quality; in particular, its Chamfer distance is twice as high.

Discussion. Monomer data turns out to be much more challenging than the common datasets. Generally, DL-based methods achieve better performance in terms of distribution statistics, while grammar-based models (including ours) have better sample quality. This is reasonable, since DL-based methods are all based on distribution learning while grammar-based methods focus on modeling chemical rules. Our approach is the only grammar-based system that performs well on distribution statistics, which highlights the importance of grammar construction. Fine-grained grammars that either iteratively attach single atoms (MHG) or only perform input-data interpolation (STONED) cannot fit the training data closely. Our sample quality is among the best.

![](images/1f07b289f8e353a20c1381fadd95e3bab24610e0494268a4264eeec88496a30f.jpg)
(a) Analysis on Isocyanate dataset

![](images/3a1273e48d7dbec40f772d051c5b03ff0e6ebf489341819ad8d62f39fac8a033.jpg)
(b) Analysis on Acrylate dataset
Figure 4: Left: Analysis of the balance factor $\lambda$. We choose 9 different combinations of $\lambda_{i}$ for two optimization objectives, Diversity and RS, showing a clear trade-off between the two. Right: Examples generated by our learned graph grammar. Our graph grammar can generate novel, complex molecular structures that do not exist in the training dataset (e.g., cyclooctane).

Furthermore, we also show that with more training data (0.3% of the whole dataset), our method can achieve better performance.

# 5.4 ANALYSIS

Optimizing for Specific Metrics, Balance Factor $\lambda$. We study the effect of $\lambda$, which weights the importance of the metrics according to user needs. We choose 9 different combinations for two optimization objectives: Diversity and RS. $\lambda_{1}$ ranges from 0 to 2 in steps of 0.25, while $\lambda_{2}$ ranges from 4 to 0 in steps of 0.5. Figure 4 depicts the results for Isocyanates and Acrylates. We see that $\lambda$ fulfills its purpose, as the performance on the two objectives can be well controlled. In our study, we use $\lambda_{1} = 1$, $\lambda_{2} = 2$ for a balanced trade-off between Diversity and RS.

Explainability Supports Applications, Functional Group Extraction. In Appendix F, we show production rules of three graph grammars learned from the three small polymer datasets. For each grammar, there is clearly a rule capturing the functional group that characterizes the dataset's corresponding monomer class. For example, $p_3$ is the relevant production rule for Isocyanates. Since functional groups must be present in all monomers of a given type, the relevant rule is easily obtained by selecting the rule shared by all inputs.

Generated Examples. Figure 4 also shows examples generated using our grammars learned on Isocyanates and Acrylates. Though we have only 32 and 11 training samples respectively, our graph grammar can be used to generate novel and complex molecules. For example, cyclooctane is not contained in the training data, but our grammar can generate it by sequentially applying two partial ring formation rules. For more generated examples on the other datasets, see Appendix C.2.
+ +# 6 CONCLUSIONS + +We propose a data-efficient generative model combining graph grammar construction with domain-specific optimization. Our grammar incorporates substructures of varying size and the construction directly optimizes various chemical metrics. Extensive experiments on three small size polymer datasets and a large polymer dataset demonstrate the effectiveness of our method. Our system is the only one that is capable of generating monomers in a specific class with a high success rate. It will be useful to incorporate property prediction models with our graph grammar for generating superior molecular candidates for practical use. + +# ACKNOWLEDGEMENT + +This work is supported by the MIT-IBM Watson AI Lab, and its member company, Evonik. + +# REFERENCES + +Salvador Aguinaga, David Chiang, and Tim Weninger. Learning hyperedge replacement grammars for graph generation. IEEE transactions on pattern analysis and machine intelligence, 41(3): 625-638, 2018. +Han Altae-Tran, Bharath Ramsundar, Aneesh S Pappu, and Vijay Pande. Low data drug discovery with one-shot learning. ACS central science, 3(4):283-293, 2017. +David Barber. Bayesian reasoning and machine learning. Cambridge University Press, 2012. +G Richard Bickerton, Gaia V Paolini, Jérémy Besnard, Sorel Muresan, and Andrew L Hopkins. Quantifying the chemical beauty of drugs. Nature chemistry, 4(2):90-98, 2012. +Binghong Chen, Chengtao Li, Hanjun Dai, and Le Song. Retro*: learning retrosynthetic planning with neural guided a* search. In International Conference on Machine Learning, pp. 1608-1616. PMLR, 2020. +Di Chen, Yada Zhu, Xiaodong Cui, and Carla P Gomes. Task-based learning via task-oriented prediction network with applications in finance. arXiv preprint arXiv:1910.09357, 2019. +Vijil Chenthamarakshan, Payel Das, Samuel Hoffman, Hendrik Strobelt, Inkit Padhi, Kar Wai Lim, Benjamin Hoover, Matteo Manica, Jannis Born, Teodororo Laino, et al. 
Cogmol: target-specific and selective drug design for Covid-19 using deep generative models. Advances in Neural Information Processing Systems, 33:4320-4332, 2020. +Noam Chomsky. On certain formal properties of grammars. Information and control, 2(2):137-167, 1959. +Hanjun Dai, Yingtao Tian, Bo Dai, Steven Skiena, and Le Song. Syntax-directed variational autoencoder for structured data. arXiv preprint arXiv:1802.08786, 2018. +Nicola De Cao and Thomas Kipf. Molgan: An implicit generative model for small molecular graphs. arXiv preprint arXiv:1805.11973, 2018. +Priya L Donti, Brandon Amos, and J Zico Kolter. Task-based end-to-end model learning in stochastic optimization. arXiv preprint arXiv:1703.04529, 2017. +Peter Ertl and Ansgar Schuffenhauer. Estimation of synthetic accessibility score of drug-like molecules based on molecular complexity and fragment contributions. Journal of cheminformatics, 1(1):1-11, 2009. +Haoqiang Fan, Hao Su, and Leonidas J Guibas. A point set generation network for 3d object reconstruction from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 605-613, 2017. +Dedre Gentner. Structure-mapping: A theoretical framework for analogy. Cognitive science, 7(2): 155-170, 1983. +Rafael Gómez-Bombarelli, Jennifer N Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, Ryan P Adams, and Alán Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. ACS central science, 4(2):268-276, 2018. +Gabriel Lima Guimaraes, Benjamin Sanchez-Lengeling, Carlos Outeiral, Pedro Luis Cunha Farias, and Alán Aspuru-Guzik. Objective-reinforced generative adversarial networks (organ) for sequence generation models. arXiv preprint arXiv:1705.10843, 2017. + +Minghao Guo, Wan Shou, Liane Makatura, Timothy Erps, Michael Foshey, and Wojciech Matusik. 
Polygrammar: Grammar for digital polymer representation and generation. arXiv preprint arXiv:2105.05278, 2021. +Justus Hibshman, Satyaki Sikdar, and Tim Weninger. Towards interpretable graph modeling with vertex replacement grammars. In 2019 IEEE International Conference on Big Data (Big Data), pp. 770-779. IEEE, 2019. +Samuel C Hoffman, Vijil Chenthamarakshan, Kahini Wadhawan, Pin-Yu Chen, and Payel Das. Optimizing molecules using efficient queries from property evaluations. Nature Machine Intelligence, 4(1):21-31, 2022. +Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. arXiv preprint arXiv:1905.12265, 2019. +John J Irwin, Teague Sterling, Michael M Mysinger, Erin S Bolstad, and Ryan G Coleman. Zinc: a free tool to discover chemistry for biology. Journal of chemical information and modeling, 52 (7):1757-1768, 2012. +Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation. In International conference on machine learning, pp. 2323-2332. PMLR, 2018. +Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Hierarchical generation of molecular graphs using structural motifs. In International Conference on Machine Learning, pp. 4839-4848. PMLR, 2020. +Hiroshi Kajino. Molecular hypergraph grammar with its application to molecular optimization. In International Conference on Machine Learning, pp. 3183-3191. PMLR, 2019. +Mario Krenn, Florian Häse, A Nigam, Pascal Friederich, and Alán Aspuru-Guzik. Selfies: a robust representation of semantically constrained graphs with an example application in chemistry. arXiv preprint arXiv:1905.13741, 2019. +Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, and Peter Battaglia. Learning deep generative models of graphs. arXiv preprint arXiv:1803.03324, 2018. 
+Renjie Liao, Yujia Li, Yang Song, Shenlong Wang, Charlie Nash, William L Hamilton, David Duvenaud, Raquel Urtasun, and Richard S Zemel. Efficient graph generation with graph recurrent attention networks. arXiv preprint arXiv:1910.00760, 2019. +Qi Liu, Miltiadis Allamanis, Marc Brockschmidt, and Alexander L Gaunt. Constrained graph variational autoencoders for molecule design. arXiv preprint arXiv:1805.09076, 2018. +Ruimin Ma and Tengfei Luo. Pi1m: a benchmark database for polymer informatics. Journal of Chemical Information and Modeling, 60(10):4684-4690, 2020. +Tengfei Ma, Jie Chen, and Cao Xiao. Constrained generation of semantically valid graphs via regularizing variational autoencoders. arXiv preprint arXiv:1809.02630, 2018. +Kaushalya Madhawa, Katushiko Ishiguro, Kosuke Nakago, and Motoki Abe. Graphnvp: An invertible flow model for generating molecular graphs. arXiv preprint arXiv:1905.11600, 2019. +Łukasz Maziarka, Agnieszka Pocha, Jan Kaczmarczyk, Krzysztof Rataj, Tomasz Danel, and Michal Warchol. Mol-cyclegan: a generative model for molecular optimization. Journal of Cheminformatics, 12(1):1-18, 2020. +Aditya Menon, James A Thompson-Colón, and Newell R Washburn. Hierarchical machine learning model for mechanical property predictions of polyurethane elastomers from small datasets. Frontiers in Materials, 6:87, 2019. +AkshitKumar Nigam, Robert Pollice, Mario Krenn, Gabriel dos Passos Gomes, and Alan Aspuru-Guzik. Beyond generative models: superfast traversal, optimization, novelty, exploration and discovery (stoned) algorithm for molecules using selfies. Chemical science, 2021. + +Marcus Olivecrona, Thomas Blaschke, Ola Engkvist, and Hongming Chen. Molecular de-novo design through deep reinforcement learning. Journal of cheminformatics, 9(1):1-14, 2017. +Daniil Polykovskiy, Alexander Zhebrak, Benjamin Sanchez-Lengeling, Sergey Golovanov, Oktai Tatanov, Stanislav Belyaev, Rauf Kurbanov, Aleksey Artamonov, Vladimir Aladinskiy, Mark Veselov, et al. 
Molecular sets (moses): a benchmarking platform for molecular generation models. Frontiers in pharmacology, 11:1931, 2020. +Raghunathan Ramakrishnan, Pavlo O Dral, Matthias Rupp, and O Anatole Von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. Scientific data, 1(1):1-7, 2014. +David Rogers and Mathew Hahn. Extended-connectivity fingerprints. Journal of chemical information and modeling, 50(5):742-754, 2010. +Bidisha Samanta, Abir De, Gourhari Jana, Vicenç Gómez, Pratim Kumar Chattaraj, Niloy Ganguly, and Manuel Gomez-Rodriguez. Nevae: A deep generative model for molecular graphs. Journal of machine learning research. 2020 Apr; 21 (114): 1-33, 2020. +Benjamin Sanchez-Lengeling, Carlos Outeiral, Gabriel L Guimaraes, and Alan Aspuru-Guzik. Optimizing distributions over molecular space: an objective-reinforced generative adversarial network for inverse-design chemistry (organic). 2017. +Boris Sattarov, Igor I Baskin, Dragos Horvath, Gilles Marcou, Esben Jannik Bjerrum, and Alexandre Varnek. De novo molecular design by combining deep autoencoder recurrent neural networks with generative topographic mapping. Journal of chemical information and modeling, 59(3): 1182-1196, 2019. +Yair Schiff, Vijil Chenthamarakshan, Samuel Hoffman, Karthikeyan Natesan Ramamurthy, and Payel Das. Augmenting molecular deep generative models with topological data analysis representations. arXiv preprint arXiv:2106.04464, 2021. +Michael D Shultz. Two decades under the influence of the rule of five and the changing properties of approved oral drugs: miniperspective. Journal of medicinal chemistry, 62(4):1701-1714, 2018. +Satyaki Sikdar, Justus Hibshman, and Tim Weninger. Modeling graphs with vertex replacement grammars. In 2019 IEEE International Conference on Data Mining (ICDM), pp. 558-567. IEEE, 2019. +Martin Simonovsky and Nikos Komodakis. Graphvae: Towards generation of small graphs using variational autoencoders. 
In International conference on artificial neural networks, pp. 412-422. Springer, 2018. +Peter C St. John, Caleb Phillips, Travis W Kemper, A Nolan Wilson, Yanfei Guan, Michael F Crowley, Mark R Nimlos, and Ross E Larsen. Message-passing neural networks for high-throughput polymer screening. The Journal of chemical physics, 150(23):234111, 2019. +Megan Stanley, John F Bronskill, Krzysztof Maziarz, Hubert Misztela, Jessica Lanini, Marwin Segler, Nadine Schneider, and Marc Brockschmidt. Fs-mol: A few-shot learning dataset of molecules. 2021. +Govindan Subramanian, Bharath Ramsundar, Vijay Pande, and Rajiah Aldrin Denny. Computational modeling of $\beta$ -secretase 1 (bace-1) inhibitors using ligand based approaches. Journal of chemical information and modeling, 56(10):1936-1949, 2016. +David Weininger. Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules. Journal of chemical information and computer sciences, 28(1):31-36, 1988. +Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3):229-256, 1992. +Youjun Xu, Kangjie Lin, Shiwei Wang, Lei Wang, Chenjing Cai, Chen Song, Luhua Lai, and Jianfeng Pei. Deep learning for molecular generation. Future medicinal chemistry, 11(6):567-597, 2019. + +Xiufeng Yang, Jinzhe Zhang, Kazuki Yoshizoe, Kei Terayama, and Koji Tsuda. Chemts: an efficient python library for de novo molecular generation. Science and technology of advanced materials, 18(1):972-976, 2017. +Jiaxuan You, Bowen Liu, Rex Ying, Vijay Pande, and Jure Leskovec. Graph convolutional policy network for goal-directed molecular graph generation. arXiv preprint arXiv:1806.02473, 2018. 
# A DETAILS ON THE MOLECULE GENERATION

Given a learned grammar, we condition the generation strategy on the (non-)terminal symbols in the rules: during molecule generation, we exponentially increase the probability of production rules without a non-terminal symbol on the right-hand side based on the iteration number. Formally, the probability of selecting a certain production rule $r$ at iteration $t$ is $p(r) = Z^{-1} \exp(\alpha t x_r)$, where $x_r$ is a binary value indicating whether rule $r$ contains only terminal symbols on the right-hand side, and $Z$ is a normalization factor. We used $\alpha = 0.5$ in our experiments, since it reduces the generation time sufficiently while maintaining satisfactory diversity.

Note that we initially experimented with uniform random sampling of rules during testing. However, the possibility of generating arbitrarily large molecules (i.e., when production rules with a non-terminal symbol on the right-hand side are chosen repeatedly) sometimes resulted in a never-ending generation process, which is a problem in practice.

# B DETAILS ON THE IMPLEMENTATION

Evaluation Metrics. For the large polymer dataset, we consider 4 additional evaluation metrics, measuring distribution statistics of the generated molecules, which are commonly used in DL-based methods. The metrics are:

- Octanol-Water Partition Coefficient (logP): Ratio of a chemical's concentration in the octanol phase to its concentration in the aqueous phase of a two-phase octanol/water system.
- Synthetic Accessibility Score (SA): Estimate of how hard or easy it is to synthesize a given molecule based on the molecule's fragments (Ertl & Schuffenhauer, 2009).
- Quantitative Estimation of Drug-likeness (QED): Estimate of how likely a molecule is to be a viable drug candidate, meant to capture aesthetics in medicinal chemistry (Bickerton et al., 2012).
- Molecular Weight (MW): Sum of atomic weights in a molecule.
Differences in the histograms for the generated and real data may reveal a bias towards generating lighter or heavier molecules.

Note that, while SA Score and QED are not considered useful sample quality heuristics for more recent molecules, they are still useful as distribution statistics (Polykovskiy et al., 2020; Shultz, 2018).

Baselines. For STONED, as suggested by the authors, we first generate the chemical space for each sample in the dataset $^5$. Then we take all samples scoring higher than 0.8 and randomly sample the remaining ones to obtain our 10k samples. For MHG, we learn a hypergraph grammar on the dataset following the paper's implementation and generate molecules by randomly sampling and applying production rules. For the other baselines we mainly use the settings suggested in the original papers. For the experiments on the small datasets, SMILESVAE, GraphNVP, and HierVAE are pretrained on the large dataset and thereafter fine-tuned on the specific small dataset; the other baseline implementations do not support pretraining, hence we train them from scratch on the small datasets.

Our System. For our approach, we choose a pretrained graph neural network (Hu et al., 2019) as our feature extractor $f(\cdot)$. Note that we do not fine-tune its parameters during training, and it can be replaced by any plug-and-play feature extractor. For the potential function $\mathcal{F}_{\theta}$, we use a two-layer fully connected network with hidden sizes 300 and 128. For the optimization objectives, we consider two metrics: Diversity and RS. For the hyperparameters, we set the MC sampling size to 5. We use the Adam optimizer to train the two-layer network with a learning rate of 0.01. We train for 20 epochs.

We train the model and construct the grammar on each dataset separately from scratch. For the large polymer dataset, we only use 117 training samples to optimize the grammar.
This is motivated by the fact that only 436 different motifs exist in the training dataset as stated in Jin et al. (2020). We can then construct a subset consisting of 436 molecules that can cover these 436 motifs by random sampling. We then sample from this subset to get the training set for our method. For the basic setting of our method, we sample 117 training molecules. We also consider the scenario with more data, where we extend the former set to 239 samples. + +After training, we generated 10k samples per dataset for evaluation. For the large polymer dataset, besides the naively generated 10k samples, we further construct another set of generated molecules + +Table 2: Results on Acrylates and Chain Extenders (best bolded, second-best underlined). The low validity of molecules generated by GraphNVP did not allow for reasonable evaluation on some metrics $(-)$ . + +
| Dataset | Method | Valid | Unique | Div. | Chamfer | RS | Member. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Acrylates | Train data | 100% | 100% | 0.67 | 0.00 | 100% | 100% |
| | GraphNVP | 0.00% | - | - | - | - | - |
| | JT-VAE | 100% | 0.50% | 0.29 | 0.86 | 4.9% | 48.64% |
| | HierVAE | 100% | 99.7% | 0.83 | 0.89 | 3.04% | 0.82% |
| | MHG | 100% | 86.8% | 0.89 | 0.84 | 36.8% | 0.93% |
| | STONED | 99.9% | 99.8% | 0.84 | 0.88 | 11.2% | 47.9% |
| | DEG | 100% | 100% | 0.86 | 0.92 | 43.9% | 69.6% |
| Chain Extenders | Train data | 100% | 100% | 0.80 | 0.00 | 100% | 100% |
| | GraphNVP | 0.01% | - | - | - | - | - |
| | JT-VAE | 100% | 2.3% | 0.62 | 0.78 | 2.20% | 79.6% |
| | HierVAE | 100% | 99.8% | 0.83 | 0.91 | 2.69% | 43.6% |
| | MHG | 100% | 87.4% | 0.90 | 0.85 | 50.6% | 41.2% |
| | STONED | 100% | 99.8% | 0.93 | 0.87 | 6.78% | 61.0% |
| | DEG | 100% | 100% | 0.93 | 0.94 | 67.5% | 93.5% |
+ +Table 3: Results on the large polymer dataset (best **bold**, second-best **underlined**). The low validity of molecules generated by GraphNVP and SMILESVAE did not allow for reasonable evaluation on some metrics $(-)$ . Our method DEG was trained on $0.15\%$ and $0.3\%$ of the train data. DEG (fitting) is evaluated using the selected 10k samples via the fitting process described in Appendix B. + +
| Method | logP (↓) | SA (↓) | QED (↓) | MW (↓) | Valid (↑) | Unique (↑) | Div. (↑) | Chamfer (↑) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Train data | 0.12 | 0.02 | 0.00 | 22.98 | 100% | 100% | 0.83 | 0.00 |
| SMILESVAE | 9.63 | 2.99 | 0.19 | 751.6 | 0.01% | - | - | - |
| GraphNVP | 2.94 | 0.65 | 0.03 | 435.6 | 0.23% | - | - | - |
| JT-VAE | 2.93 | 0.32 | 0.10 | 210.1 | 100% | 83.9% | 0.88 | 0.50 |
| HierVAE | 0.50 | 0.08 | 0.02 | 42.45 | 100% | 99.9% | 0.82 | 0.32 |
| MHG | 9.20 | 1.91 | 0.10 | 380.3 | 100% | 100% | 0.91 | 0.56 |
| STONED | 2.43 | 0.81 | 0.07 | 179.9 | 99.9% | 100% | 0.83 | 0.45 |
| DEG (0.15%, fitting) | 1.80 | 0.25 | 0.02 | 69.0 | 100% | 100% | 0.82 | 0.60 |
| DEG (0.15%) | 5.52 | 0.51 | 0.20 | 334.2 | 100% | 100% | 0.86 | 0.62 |
| DEG (0.3%, fitting) | 1.93 | 0.23 | 0.02 | 70.2 | 100% | 100% | 0.85 | 0.62 |
| DEG (0.3%) | 5.64 | 0.41 | 0.19 | 311.4 | 100% | 100% | 0.88 | 0.63 |
to have a fair comparison with DL-based methods on distribution statistics. We perform a fitting process that selects 10k data points from 100k generated samples using the same learned grammar in order to achieve better performance on distribution statistics. Specifically, we calculate the four distribution-statistics metrics for both the training dataset and the 100k generated samples, so that each data point corresponds to a 4-dimensional vector. We perform $k$-means clustering of all vectors of the training dataset with $k = 10,000$. Then we partition the 4-dimensional space into $k$ regions using a Voronoi diagram and treat each region as having 'equal mass'. For each region, we sample one data point from our generated samples that lie in the region. Thus, we construct a set of 10k generated samples and evaluate it the same way as for the other methods.

# C ADDITIONAL RESULTS

# C.1 QUANTITATIVE RESULTS

We report the results on the Acrylate and Chain Extender datasets in Table 2. Since SMILESVAE cannot achieve reasonable validity on any of the three small datasets, we omit it from the tables. Our method significantly outperforms the others in terms of Retro* Score and Membership, while achieving the best or comparable performance across all other metrics.

The results for the large polymer dataset are reported in Table 3. As is common practice (Jin et al., 2020; Polykovskiy et al., 2020), we compute the Frechet distance between property distributions of molecules in the generated and test sets for the distribution statistics. All but our two new evaluation metrics are computed using MOSES (Polykovskiy et al., 2020). Since the building blocks of the polymer data in this dataset are from a special database (St. John et al., 2019), which differs significantly from the emolecule database Retro* is trained on, we do not report RS as it is less meaningful here. Our method achieves remarkable performance across all evaluation metrics.
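The fitting process can be sketched as follows (simplified: each training metric vector acts directly as a region center instead of running $k$-means with $k = 10,000$; all names are illustrative):

```python
# Simplified sketch of the fitting-based selection: keep, for each
# region center, the generated sample whose 4-d metric vector
# (logP, SA, QED, MW) lies closest to it.

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def fit_select(train_vecs, gen_vecs):
    """One nearest generated sample per region center."""
    return [min(gen_vecs, key=lambda v: sq_dist(v, center))
            for center in train_vecs]

train = [(0.0, 0.0, 0.0, 0.0), (1.0, 1.0, 1.0, 1.0)]
gen = [(0.1, 0.0, 0.0, 0.0), (0.9, 1.0, 1.1, 1.0), (5.0, 5.0, 5.0, 5.0)]
print(fit_select(train, gen))  # the two generated vectors near the centers
```

The outlier (5.0, 5.0, 5.0, 5.0) is never selected, which is exactly the intended effect on the distribution statistics.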
Note that we only use 117 data points while the others are fully trained on the whole dataset.

# C.2 EXAMPLES OF GENERATED RESULTS

![](images/044d40eba03d4aef7f6624e0c5d1bc54d13c91fdbfe6aac67a1d4c2a8db6ff22.jpg)
(a) Examples generated by the graph grammar learned on the large polymer dataset

![](images/282dbf51959bc978ca8df9013567461e9008bea82f81218ddf0b25f4d603d032.jpg)
(b) Examples generated by the graph grammar learned on the Chain Extender dataset
Figure 5: Examples of generated results using our learned graph grammar.

# D SWITCHING THE FEATURE EXTRACTOR

In order to demonstrate the plug-and-play capability of our system's feature extractor, we provide experimental results where we use simple embeddings from the deepchem package. As can be seen in Figure 6, this simple feature extractor yields higher performance than the pretrained GNN at the start of the optimization, but cannot improve further throughout the learning process. In order to obtain a reasonably optimized grammar, an ideal feature extractor should capture both the local and the global information of the hyperedges in the hypergraph. Hence, the GNN better fits our purpose of grammar construction.

![](images/bb6389870f14606dd67c4df93a0de50d3f64cbe39c6f3f0ce840be668bedd088.jpg)
Figure 6: Comparison with a simple feature extractor on Isocyanates.

![](images/c8b85df18c794609819d5436449a76a4cf3aebf399d39677ae2bef4202c24307.jpg)

# E DEMONSTRATION OF STABILITY OF REINFORCE

Since REINFORCE is known to have large variance, we provide results on the stability of our proposed DEG. We run our algorithm with three random seeds on the Isocyanates data. The experimental results are shown in Figure 7. The diversity scores are relatively stable across the three random seeds. The RS scores vary more, but all three experiments converge to a similar value toward the end of optimization. We also show convergence curves for the other three datasets in Figure 8.
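The variance we check for above is intrinsic to the score-function (REINFORCE) estimator itself. As a self-contained illustration (a toy three-action objective in numpy, unrelated to our actual grammar-training code; all names here are ours), the estimator is unbiased but noticeably noisy at small sample sizes, which is why a multi-seed stability check is warranted:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_grad(theta, reward, n_samples, rng):
    """Score-function (REINFORCE) estimate of d/dtheta E_{a~pi_theta}[r(a)].

    For a categorical policy pi = softmax(theta), the per-sample term is
    r(a) * grad log pi(a) = r(a) * (onehot(a) - pi).
    """
    pi = softmax(theta)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        a = rng.choice(len(pi), p=pi)
        grad += reward[a] * (np.eye(len(pi))[a] - pi)
    return grad / n_samples

theta = np.zeros(3)
reward = np.array([0.0, 1.0, 0.2])      # hypothetical per-action rewards
pi = softmax(theta)
exact = pi * (reward - pi @ reward)     # closed-form gradient for comparison

# Small-sample estimates fluctuate from seed to seed (hence multi-seed checks);
# a large sample converges to the exact gradient because the estimator is unbiased.
noisy = [reinforce_grad(theta, reward, 10, np.random.default_rng(s)) for s in range(3)]
stable = reinforce_grad(theta, reward, 20000, np.random.default_rng(0))
```

The spread of `noisy` across seeds is the toy analogue of the seed-to-seed variation of the RS curves in Figure 7.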
+ +![](images/9d65206aeea747bfe06ea5e09a08101433d4d5ea566a92a04503af35b53fca8e.jpg) +Figure 7: Analysis of stability of proposed DEG. + +![](images/9ac98ef9e08f0c37fb489a5de416cf8101f0ed6d0a3153899c603c5c0dcedd09.jpg) + +![](images/958883adc12bc87085e98c907d031060610156bf0067b1f32a54489da4092517.jpg) +(a) Acrylates + +![](images/ae33aafc06dd0e85ba6311204fb6e09cbc237b484a8ff7d79e58374ee84551c9.jpg) +(b) Chain Extenders + +![](images/b7939ccf96b99543f136c52ab5b05a3e06a5490749e963824ec75c808f604d43.jpg) +(c) Polymer Dataset $(0.15\%)$ +Figure 8: Convergence curves on three datasets. + +# F LEARNED GRAPH GRAMMAR + +![](images/85649f622f0345eb3b5cceb6d836e04902b9fe46403ca5f6aaec7382854b997d.jpg) +(a) Examples of production rules learned from Isocyanates dataset + +![](images/8f153f1a0cb04b1ccce21ecfe05f83c1c379924d9e307d7723d7f73fa8bf6760.jpg) +(b) Examples of production rule learned from Acrylates dataset + +![](images/e52d9f686ffd51ca3239248fa36f61392b19a7dea4e082649a2ff492f83d2c55.jpg) +(c) Examples of production rule learned from Chain Extenders dataset +Figure 9: Examples of production rules from our learned graph grammar. 
# G OUR DATASETS

# G.1 ISOCYANATES

MDI O=C=NC1=CC=CC(CC2=CC=C(C=C2N=C=O)CC3=CC=C(C=C3)N=C=O)=C1

MDI O=C=NC1=CC(CC2=C(C=C(C=C2)CC3=CC=C(C=C3N=C=O)CC4=CC=C(C=C4)N=C=O)N=C=O)=CC=C1

MDI O=C=NC1=CC=C(C=C1)CC2=CC=C(C=C2N=C=O)CC3=C(C=C(C=C3)CC4=CC=C(C=C4N=C=O)CC5=CC=C(C=C5)N=C=O)N=C=O

HDI O=C=NCCCCCN=C=O

HDI O=C=NCCCCCCCCCCCCCN=C=O

HDI O=C=NCCCCCCCCCCCCCCCCCCCCCCCN=C=O

HDI O=C=NCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCN=C=O

IPDI CC1(CC(CC(CN=C=O)(C1)C)N=C=O)C

TDI CC1=C(C=C(C=C1)CN=C=O)N=C=O

HMDI O=C=NC1CCC(CC2CCC(CC2)N=C=O)CC1

LDI CCOC(C(N=C=O)CCCCN=C=O)=O

# G.2 ACRYLATES

Benzyl Acrylate C=CC(=O)OCC1=CC=CC=C1

Phenyl Acrylate C=CC(=O)OC1=CC=CC=C1

Phenyl Methacrylate CC(=C)C(=O)OC1=CC=CC=C1

2-Phenylethyl Acrylate C=CC(=O)OCCC1=CC=CC=C1

n-Octyl Methacrylate CCCCCCCCOC(=O)C(=C)C

Sec-Butyl Acrylate CCC(C)OC(=O)C=C

Benzyl Methacrylate CC(=C)C(=O)OCC1=CC=CC=C1

Pentafluorophenyl acrylate C=CC(=O)OC1=C(C(=C(C(=C1F)F)F)F)F

```
iso-Butyl methacrylate CC(C)COC(=O)C(=C)C
n-Dodecyl methacrylate CCCCCCCCCCCCOC(=O)C(=C)C
sec-Butyl methacrylate CCC(C)OC(=O)C(=C)C
n-Propyl methacrylate CCCOC(=O)C(=C)C
3,3,5-Trimethylcyclohexyl methacrylate CC1CC(CC(C1)(C)C)OC(=O)C(=C)C
iso-Decyl acrylate CC(C)CCCCCCCOC(=O)C=C
n-Propyl acrylate CCCOC(=O)C=C
2-Methoxyethyl acrylate COCCOC(=O)C=C
2-Phenoxyethyl methacrylate CC(=C)C(=O)OCCOC1=CC=CC=C1
n-Hexyl acrylate CCCCCCOC(=O)C=C
2-n-Butoxyethyl methacrylate CCCCOCCOC(=O)C(=C)C
Methyl Methacrylate CC(=C)C(=O)OC
Methyl Acrylate COC(=O)C=C
Butyl Acrylate CCCCOC(=O)C=C
2-Ethoxyethyl methacrylate CCOCCOC(=O)C(=C)C
Isobornyl methacrylate CC(=C)C(=O)OC1CC2CCC1(C2(C)C)C
2-Ethylhexyl methacrylate CCCCC(CC)COC(=O)C(=C)C
Neopentyl glycol propoxyl diacrylate CC(C)(COCCCOC(=O)C=C)COCCCOC(=O)C=C
1,6-Hexanediol diacrylate C=CC(=O)OCCCCCCOC(=O)C=C
Pentaerythritol triacrylate C=CC(=O)OCC(CO)(COC(=O)C=C)COC(=O)C=C
Trimethylolpropane propoxyl triacrylate CCC(COCCCOC(=O)C=C)(COCCCOC(=O)C=C)COCCCOC(=O)C=C
Di(trimethylolpropane) tetraacrylate CCC(COCC(CC)(COC(=O)C=C)COC(=O)C=C)(COC(=O)C=C)COC(=O)C=C
Dipentaerythritol pentaacrylate C=CC(=O)OCC(CO)(COCC(COC(=O)C=C)(COC(=O)C=C)COC(=O)C=C)COC(=O)C=C
Dipentaerythritol hexaacrylate C=CC(=O)OCC(COCC(COC(=O)C=C)(COC(=O)C=C)COC(=O)C=C)(COC(=O)C=C)COC(=O)C=C
```

# G.3 CHAIN EXTENDERS

EG OCCO

1,3-BD OC(C)CCO

BD OCCCCO

AE-H-AE OCCNC(=O)NCCCCCNC(=O)NCCO

AE-L-AE OCCN1C(=O)NC(C1=O)CCCCNC(=O)NCCO

D-E-D Oc1ccc(cc1)CCC(=O)OCCOC(=O)CCc1ccc(cc1)O

LYS OC(=O)C(N)CCCCN

L-Orn OC(=O)C(N)CCCN

Pip N1CCNCC1

AFD Nc1ccc(cc1)SSc2ccc(cc2)N

MDA Nc1ccc(cc1)Cc2ccc(cc2)N \ No newline at end of file diff --git a/dataefficientgraphgrammarlearningformoleculargeneration/images.zip b/dataefficientgraphgrammarlearningformoleculargeneration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..fc947ac9d6f789f93f20712b3ab2f6a8ca5f9380 --- /dev/null +++ b/dataefficientgraphgrammarlearningformoleculargeneration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1be1856984dd06e5d9b731c8a6e5b81d0eee8155ab632adc76e3e21c58653981 +size 663422
diff --git a/dataefficientgraphgrammarlearningformoleculargeneration/layout.json b/dataefficientgraphgrammarlearningformoleculargeneration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..57611237cf982c1b2efd669d1f47de2574382682 --- /dev/null +++ b/dataefficientgraphgrammarlearningformoleculargeneration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:09f8d3e1a11dfe73eb0725a915879660e4917a73b36d5506ec3cfd908174ac81 +size 618211 diff --git a/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/d5934f88-4ccf-4ce6-a0d1-16a2966e9f93_content_list.json b/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/d5934f88-4ccf-4ce6-a0d1-16a2966e9f93_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f3d1fc026ddd0b789721f503620dd2484bb9de9b --- /dev/null +++ b/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/d5934f88-4ccf-4ce6-a0d1-16a2966e9f93_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bbb65e5811339f4b618fa48c27bcbf3f2fa28d2e10f98fd71304d999dc50b041 +size 167245 diff --git a/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/d5934f88-4ccf-4ce6-a0d1-16a2966e9f93_model.json b/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/d5934f88-4ccf-4ce6-a0d1-16a2966e9f93_model.json new file mode 100644 index 0000000000000000000000000000000000000000..19067868e28449047437a3a52770efbec3d41056 --- /dev/null +++ b/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/d5934f88-4ccf-4ce6-a0d1-16a2966e9f93_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b7916137835919e0aae73a5cbae9ca754b11e4bc6a765bcdad4913510c008c18 +size 192522 diff --git a/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/d5934f88-4ccf-4ce6-a0d1-16a2966e9f93_origin.pdf 
b/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/d5934f88-4ccf-4ce6-a0d1-16a2966e9f93_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..07d1be11d82f6452de7e9503b26ae67d3ba9c2bd --- /dev/null +++ b/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/d5934f88-4ccf-4ce6-a0d1-16a2966e9f93_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:365aaab0011a6a70b76d6460bdc47613ad3a9729ee9614b9690b03700611820f +size 945803 diff --git a/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/full.md b/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/full.md new file mode 100644 index 0000000000000000000000000000000000000000..3c0427e04a967bee875af45515f087264ba4b1cc --- /dev/null +++ b/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/full.md @@ -0,0 +1,793 @@ +# DIFFUSION-BASED VOICE CONVERSION WITH FAST MAXIMUM LIKELIHOOD SAMPLING SCHEME + +Vadim Popov, Ivan Vovk, Vladimir Gogoryan, + +Huawei Noah's Ark Lab, Moscow, Russia + +{vadim.popov,vovk.ivan,gogoryan.vladimir}@huawei.com + +Tasnima Sadekova, Mikhail Kudinov & Jiansheng Wei + +Huawei Noah's Ark Lab, Moscow, Russia + +{sadekova.tasnima,kudinov.mikhail,weijiansheng}@huawei.com + +# ABSTRACT + +Voice conversion is a common speech synthesis task which can be solved in different ways depending on a particular real-world scenario. The most challenging one often referred to as one-shot many-to-many voice conversion consists in copying target voice from only one reference utterance in the most general case when both source and target speakers do not belong to the training dataset. We present a scalable high-quality solution based on diffusion probabilistic modeling and demonstrate its superior quality compared to state-of-the-art one-shot voice conversion approaches. 
Moreover, focusing on real-time applications, we investigate general principles which can make diffusion models faster while keeping synthesis quality at a high level. As a result, we develop a novel Stochastic Differential Equations solver suitable for various diffusion model types and generative tasks, as shown through empirical studies, and justify it by theoretical analysis.

# 1 INTRODUCTION

Voice conversion (VC) is the task of copying the target speaker's voice while preserving the linguistic content of the utterance pronounced by the source speaker. Practical VC applications often require a model which is able to operate in one-shot mode (i.e. when only one reference utterance is provided to copy the target speaker's voice) for any source and target speakers. Such models are usually referred to as one-shot many-to-many models (or sometimes zero-shot many-to-many models, or just any-to-any VC models). Building such a model is challenging since it must adapt to a new unseen voice from only one spoken utterance, so it was not until recently that successful one-shot VC solutions started to appear.

Conventional one-shot VC models are designed as autoencoders whose latent space ideally contains only the linguistic content of the encoded utterance while target voice identity information (usually taking the shape of a speaker embedding) is fed to the decoder as conditioning. Whereas in the pioneering AutoVC model (Qian et al., 2019) only the speaker embedding from a pre-trained speaker verification network was used as conditioning, several other models improved on AutoVC by enriching the conditioning with phonetic features such as pitch and loudness (Qian et al., 2020; Nercessian, 2020), or by training the voice conversion and speaker embedding networks jointly (Chou & Lee, 2019).
Also, several papers (Lin et al., 2021; Ishihara & Saito, 2020; Liu et al., 2021b) made use of attention mechanism to better fuse specific features of the reference utterance into the source utterance thus improving the decoder performance. Apart from providing the decoder with sufficiently rich information, one of the main problems autoencoder VC models face is to disentangle source speaker identity from speech content in the encoder. Some models (Qian et al., 2019; 2020; Nercessian, 2020) solve this problem by introducing an information bottleneck. Among other popular solutions of the disentanglement problem one can mention applying vector quantization technique to the content information (Wu et al., 2020; Wang et al., 2021), utilizing features of Variational AutoEncoders (Luong & Tran, 2021; Saito et al., 2018; Chou & Lee, 2019), introducing instance normalization layers (Chou & Lee, 2019; Chen et al., 2021b), and using Phonetic Posteriorgrams (PPGs) (Nercessian, 2020; Liu et al., 2021b). + +The model we propose in this paper solves the disentanglement problem by employing the encoder predicting "average voice": it is trained to transform mel features corresponding to each phoneme into mel features corresponding to this phoneme averaged across a large multi-speaker dataset. As for decoder, in our VC model, it is designed as a part of a Diffusion Probabilistic Model (DPM) since this class of generative models has shown very good results in speech-related tasks like raw waveform generation (Chen et al., 2021a; Kong et al., 2021) and mel feature generation (Popov et al., 2021; Jeong et al., 2021). However, this decoder choice poses a problem of slow inference because DPM forward pass scheme is iterative and to obtain high-quality results it is typically necessary to run it for hundreds of iterations (Ho et al., 2020; Nichol & Dhariwal, 2021). 
Addressing this issue, we develop a novel inference scheme that significantly reduces the number of iterations sufficient to produce samples of decent quality and does not require model re-training. Although several attempts have been recently made to reduce the number of DPM inference steps (Song et al., 2021a; San-Roman et al., 2021; Watson et al., 2021; Kong & Ping, 2021; Chen et al., 2021a), most of them apply to some particular types of DPMs. In contrast, our approach generalizes to all popular kinds of DPMs and has a strong connection with likelihood maximization. + +This paper has the following structure: in Section 2 we present a one-shot many-to-many VC model and describe DPM it relies on; Section 3 introduces a novel DPM sampling scheme and establishes its connection with likelihood maximization; the experiments regarding voice conversion task as well as those demonstrating the benefits of the proposed sampling scheme are described in Section 4; we conclude in Section 5. + +# 2 VOICE CONVERSION DIFFUSION MODEL + +As with many other VC models, the one we propose belongs to the family of autoencoders. In fact, any conditional DPM with data-dependent prior (i.e. terminal distribution of forward diffusion) can be seen as such: forward diffusion gradually adding Gaussian noise to data can be regarded as encoder while reverse diffusion trying to remove this noise acts as a decoder. DPMs are trained to minimize the distance (expressed in different terms for different model types) between the trajectories of forward and reverse diffusion processes thus, speaking from the perspective of autoencoders, minimizing reconstruction error. Data-dependent priors have been proposed by Popov et al. (2021) and Lee et al. (2021), and we follow the former paper due to the flexibility of the continuous DPM framework used there. Our approach is summarized in Figure 1. 
+ +![](images/d22ac36b554058ce850fbe3a2e5bf0781dcf9da2069b5934041fc68c1db4b285.jpg) +Figure 1: VC model training and inference. $Y$ stands for the training mel-spectrogram at training and the target mel-spectrogram at inference. Speaker conditioning in the decoder is enabled by the speaker conditioning network $g_{t}(Y)$ where $Y = \{Y_{t}\}_{t\in [0,1]}$ is the whole forward diffusion trajectory starting at $Y_{0}$ . Dotted arrows denote operations performed only at training. + +# 2.1 ENCODER + +We choose average phoneme-level mel features as speaker-independent speech representation. To train the encoder to convert input mel-spectrograms into those of "average voice", we take three steps: (i) first, we apply Montreal Forced Aligner (McAuliffe et al., 2017) to a large-scale multi-speaker LibriTTS dataset (Zen et al., 2019) to align speech frames with phonemes; (ii) next, we obtain average mel features for each particular phoneme by aggregating its mel features across the whole LibriTTS dataset; (iii) the encoder is then trained to minimize mean square error between output mel-spectrograms and ground truth "average voice" mel-spectrograms (i.e. input mel-spectrograms where each phoneme mel feature is replaced with the average one calculated on the previous step). + +The encoder has exactly the same Transformer-based architecture used in Grad-TTS (Popov et al., 2021) except that its inputs are mel features rather than character or phoneme embeddings. Note that unlike Grad-TTS the encoder is trained separately from the decoder described in the next section. + +# 2.2 DECODER + +Whereas the encoder parameterizes the terminal distribution of the forward diffusion (i.e. the prior), the reverse diffusion is parameterized with the decoder. Following Song et al. (2021c) we use Itô calculus and define diffusions in terms of stochastic processes rather than discrete-time Markov chains. 
The general DPM framework we utilize consists of forward and reverse diffusions given by the following Stochastic Differential Equations (SDEs):

$$
d X _ {t} = \frac {1}{2} \beta_ {t} (\bar {X} - X _ {t}) d t + \sqrt {\beta_ {t}} d \overrightarrow {W _ {t}}, \tag {1}
$$

$$
d \hat {X} _ {t} = \left(\frac {1}{2} (\bar {X} - \hat {X} _ {t}) - s _ {\theta} (\hat {X} _ {t}, \bar {X}, t)\right) \beta_ {t} d t + \sqrt {\beta_ {t}} d \overleftarrow {W} _ {t}, \tag {2}
$$

where $t \in [0,1]$, $\overrightarrow{W}$ and $\overleftarrow{W}$ are two independent Wiener processes in $\mathbb{R}^n$, $\beta_{t}$ is a non-negative function referred to as the noise schedule, $s_\theta$ is the score function with parameters $\theta$, and $\bar{X}$ is an $n$-dimensional vector. It can be shown (Popov et al., 2021) that the forward SDE (1) allows for an explicit solution:

$$
\operatorname {L a w} \left(X _ {t} \mid X _ {0}\right) = \mathcal {N} \left(e ^ {- \frac {1}{2} \int_ {0} ^ {t} \beta_ {s} d s} X _ {0} + \left(1 - e ^ {- \frac {1}{2} \int_ {0} ^ {t} \beta_ {s} d s}\right) \bar {X}, \left(1 - e ^ {- \int_ {0} ^ {t} \beta_ {s} d s}\right) I\right), \tag {3}
$$

where $\mathrm{I}$ is the $n\times n$ identity matrix. Thus, if the noise follows the linear schedule $\beta_{t} = \beta_{0} + t(\beta_{1} - \beta_{0})$ for $\beta_0$ and $\beta_{1}$ such that $e^{-\int_0^1\beta_sds}$ is close to zero, then $\mathrm{Law}(X_{1})$ is close to $\mathcal{N}(\bar{X},\mathrm{I})$, which is the prior in this DPM.
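The closed-form marginal (3) is what makes training simulation-free: $X_t$ can be drawn in one step for any $t$, and since (3) is Gaussian its score is analytic, giving the regression target used in the loss that follows. A minimal numpy sketch (the schedule values, dimensionality, and the zero "network" are illustrative placeholders, not our trained model):

```python
import numpy as np

beta0, beta1 = 0.05, 20.0                     # illustrative linear noise schedule

def int_beta(t):
    """Closed-form integral of beta_s over [0, t] for the linear schedule."""
    return beta0 * t + 0.5 * (beta1 - beta0) * t ** 2

def sample_xt(x0, xbar, t, rng):
    """One-step draw from Law(X_t | X_0) given by Eq. (3)."""
    a = np.exp(-0.5 * int_beta(t))            # mean interpolates X_0 -> X_bar
    mean = a * x0 + (1.0 - a) * xbar
    var = 1.0 - np.exp(-int_beta(t))
    return mean + np.sqrt(var) * rng.standard_normal(x0.shape), mean, var

rng = np.random.default_rng(0)
x0 = rng.standard_normal(80)                  # stand-in for one mel-spectrogram frame
xbar = np.zeros_like(x0)                      # stand-in for the encoder's "average voice" output

t = 0.5
xt, mean, var = sample_xt(x0, xbar, t, rng)
target = -(xt - mean) / var                   # Gaussian score of Law(X_t | X_0)

s_theta = lambda x, xb, t: np.zeros_like(x)   # placeholder for the decoder network
loss = var * np.mean((s_theta(xt, xbar, t) - target) ** 2)   # weighting lambda_t = 1 - e^{-int beta}
```

No intermediate states $\{X_s\}_{0<s<t}$ are ever simulated, which is exactly the efficiency property exploited during training.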
The reverse diffusion (2) is trained by minimizing a weighted $L_{2}$ loss:

$$
\theta^ {*} = \underset {\theta} {\arg \min } \mathcal {L} (\theta) = \underset {\theta} {\arg \min } \int_ {0} ^ {1} \lambda_ {t} \mathbb {E} _ {X _ {0}, X _ {t}} \| s _ {\theta} (X _ {t}, \bar {X}, t) - \nabla \log p _ {t | 0} (X _ {t} | X _ {0}) \| _ {2} ^ {2} d t, \tag {4}
$$

where $p_{t|0}(X_t|X_0)$ is the probability density function (pdf) of the conditional distribution (3) and $\lambda_t = 1 - e^{-\int_0^t\beta_sds}$. The distribution (3) is Gaussian, so we have

$$
\nabla \log p _ {t | 0} (X _ {t} | X _ {0}) = - \frac {X _ {t} - X _ {0} e ^ {- \frac {1}{2} \int_ {0} ^ {t} \beta_ {s} d s} - \bar {X} (1 - e ^ {- \frac {1}{2} \int_ {0} ^ {t} \beta_ {s} d s})}{1 - e ^ {- \int_ {0} ^ {t} \beta_ {s} d s}}. \tag {5}
$$

At training, the time variable $t$ is sampled uniformly from $[0, 1]$, noisy samples $X_{t}$ are generated according to formula (3), and formula (5) is used to calculate the loss function $\mathcal{L}$ on these samples. Note that $X_{t}$ can be sampled without calculating the intermediate values $\{X_{s}\}_{0 < s < t}$, which makes the optimization task (4) time and memory efficient. A well-trained reverse diffusion (2) has trajectories that are close to those of the forward diffusion (1), so generating data with this DPM can be performed by sampling $\hat{X}_1$ from the prior $\mathcal{N}(\bar{X},\mathrm{I})$ and solving SDE (2) backwards in time.

The DPM described above was introduced by Popov et al. (2021) for the text-to-speech task, and we adapt it for our purposes. We put $\bar{X} = \varphi(X_0)$ where $\varphi$ is the encoder, i.e. $\bar{X}$ is the "average voice" mel-spectrogram which we want to transform into that of the target voice.
We condition the decoder $s_\theta = s_\theta(\hat{X}_t, \bar{X}, g_t(Y), t)$ on a trainable function $g_t(Y)$ to provide it with information about the target speaker ($Y$ stands for the forward trajectories of the target mel-spectrogram at inference and those of the training mel-spectrogram at training). This function is a neural network trained jointly with the decoder. We experimented with three input types for this network:

- $d$-only – the input is the speaker embedding extracted from the target mel-spectrogram $Y_{0}$ with the pre-trained speaker verification network employed in (Jia et al., 2018);
- wodyn – in addition, the noisy target mel-spectrogram $Y_{t}$ is used as input;
- whole – in addition, the whole dynamics of the target mel-spectrogram under forward diffusion $\{Y_s|s = 0.5 / 15, 1.5 / 15,.., 14.5 / 15\}$ is used as input.

The decoder architecture is based on U-Net (Ronneberger et al., 2015) and is the same as in Grad-TTS but with four times more channels to better capture the whole range of human voices. The speaker conditioning network $g_{t}(Y)$ is composed of 2D convolutions and MLPs and is described in detail in Appendix H. Its output is a 128-dimensional vector which is broadcast-concatenated to the concatenation of $\hat{X}_{t}$ and $\bar{X}$ as additional 128 channels.

# 2.3 RELATED VC MODELS

To the best of our knowledge, there exist two diffusion-based voice conversion models: VoiceGrad (Kameoka et al., 2020) and DiffSVC (Liu et al., 2021a). The one we propose differs from them in several important aspects. First, neither of the mentioned papers considers a one-shot many-to-many voice conversion scenario. Next, these models take no less than 100 reverse diffusion steps at inference, while we pay special attention to reducing the number of iterations (see Section 3), achieving good quality with as few as 6 iterations.
Furthermore, VoiceGrad performs voice conversion by running Langevin dynamics starting from the source mel-spectrogram, thus implicitly assuming that forward diffusion trajectories starting from the mel-spectrogram we want to synthesize are likely to pass through the neighborhood of the source mel-spectrogram on their way to Gaussian noise. Such an assumption, which allows having only one network instead of an encoder-decoder architecture, is too strong and hardly holds for real voices. Finally, DiffSVC performs singing voice conversion and relies on PPGs as speaker-independent speech representation.

# 3 MAXIMUM LIKELIHOOD SDE SOLVER

In this section, we develop a fixed-step first-order reverse SDE solver that maximizes the log-likelihood of sample paths of the forward diffusion. This solver differs from the general-purpose Euler-Maruyama SDE solver (Kloeden & Platen, 1992) by infinitesimally small values which can nevertheless become significant when we sample from a diffusion model using a few iterations.

Consider the following forward and reverse SDEs defined in Euclidean space $\mathbb{R}^n$ for $t\in [0,1]$:

$$
d X _ {t} = - \frac {1}{2} \beta_ {t} X _ {t} d t + \sqrt {\beta_ {t}} d \overrightarrow {W _ {t}} \quad (F), \qquad d \hat {X} _ {t} = \left(- \frac {1}{2} \beta_ {t} \hat {X} _ {t} - \beta_ {t} s _ {\theta} (\hat {X} _ {t}, t)\right) d t + \sqrt {\beta_ {t}} d \overleftarrow {W _ {t}} \quad (R), \tag {6}
$$

where $\overrightarrow{W}$ is a forward Wiener process (i.e. its forward increments $\overrightarrow{W_t} -\overrightarrow{W_s}$ are independent of $\overrightarrow{W_s}$ for $t > s$) and $\overleftarrow{W}$ is a backward Wiener process (i.e. its backward increments $\overleftarrow{W_s} -\overleftarrow{W_t}$ are independent of $\overleftarrow{W_t}$ for $s < t$). Following Song et al. (2021c) we will call the DPM (6) Variance Preserving (VP). For simplicity we will derive the maximum likelihood solver for this particular type of diffusion model.
The equation (1) underlying VC diffusion model described in Section 2 can be transformed into the equation (6-F) by a constant shift and we will call such diffusion models Mean Reverting Variance Preserving (MR-VP). VP model analysis carried out in this section can be easily extended (see Appendices D, E and F) to MR-VP model as well as to other common diffusion model types such as sub-VP and VE described by Song et al. (2021c). + +The forward SDE (6-F) allows for explicit solution: + +$$ +\mathrm {L a w} (X _ {t} | X _ {s}) = \mathcal {N} (\gamma_ {s, t} X _ {s}, (1 - \gamma_ {s, t} ^ {2}) \mathrm {I}), \quad \gamma_ {s, t} = \exp \left(- \frac {1}{2} \int_ {s} ^ {t} \beta_ {u} d u\right), \qquad (7) +$$ + +for all $0 \leq s < t \leq 1$ . This formula is derived by means of Ito calculus in Appendix A. The reverse SDE (6-R) parameterized with a neural network $s_{\theta}$ is trained to approximate gradient of the log-density of noisy data $X_{t}$ : + +$$ +\theta^ {*} = \arg \min _ {\theta} \int_ {0} ^ {1} \lambda_ {t} \mathbb {E} _ {X _ {t}} \| s _ {\theta} (X _ {t}, t) - \nabla \log p _ {t} (X _ {t}) \| _ {2} ^ {2} d t, \tag {8} +$$ + +where the expectation is taken with respect to noisy data distribution $\mathrm{Law}(X_t)$ with pdf $p_t(\cdot)$ and $\lambda_t$ is some positive weighting function. Note that certain Lipschitz constraints should be satisfied by coefficients of SDEs (6) to guarantee existence of strong solutions (Liptser & Shiryaev, 1978), and throughout this section we assume these conditions are satisfied as well as those from (Anderson, 1982) which guarantee that paths $\hat{X}$ generated by the reverse SDE (6-R) for the optimal $\theta^*$ equal forward SDE (6-F) paths $X$ in distribution. + +The generative procedure of a VP DPM consists in solving the reverse SDE (6-R) backwards in time starting from $\hat{X}_1\sim \mathcal{N}(0,\mathrm{I})$ . 
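For a linear schedule the integral in (7) is available in closed form, so $\gamma_{s,t}$ and the one-step transition draw can be computed directly; the marginals are also consistent under composition, $\gamma_{0,t} = \gamma_{0,s}\gamma_{s,t}$, which the sketch below checks numerically (schedule values are illustrative, not tuned):

```python
import numpy as np

beta0, beta1 = 0.05, 20.0                     # illustrative linear noise schedule

def gamma(s, t):
    """gamma_{s,t} = exp(-0.5 * integral of beta_u over [s, t]), Eq. (7)."""
    integral = beta0 * (t - s) + 0.5 * (beta1 - beta0) * (t ** 2 - s ** 2)
    return np.exp(-0.5 * integral)

def sample_transition(xs, s, t, rng):
    """One draw from Law(X_t | X_s) = N(gamma_{s,t} X_s, (1 - gamma_{s,t}^2) I)."""
    g = gamma(s, t)
    return g * xs + np.sqrt(1.0 - g ** 2) * rng.standard_normal(xs.shape)

# Composing s -> u -> t matches going s -> t directly (semigroup property):
assert np.isclose(gamma(0.0, 0.7), gamma(0.0, 0.3) * gamma(0.3, 0.7))

# gamma_{0,1} is nearly zero, so Law(X_1) is close to the N(0, I) prior
# even when started far from the origin:
x1 = sample_transition(np.full(4, 10.0), 0.0, 1.0, np.random.default_rng(0))
```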
The common Euler-Maruyama solver introduces a discretization error (Kloeden & Platen, 1992) which may harm sample quality when the number of iterations is small. At the same time, it is possible to design unbiased (Henry-Labordere et al., 2017) or even exact (Beskos & Roberts, 2005) numerical solvers for some particular SDE types. Theorem 1 shows that in the case of diffusion models we can make use of the forward diffusion (6-F) and propose a reverse SDE solver which is better than the general-purpose Euler-Maruyama one in terms of likelihood.

The solver proposed in Theorem 1 is expressed in terms of the values defined as follows:

$$
\mu_ {s, t} = \gamma_ {s, t} \frac {1 - \gamma_ {0 , s} ^ {2}}{1 - \gamma_ {0 , t} ^ {2}}, \quad \nu_ {s, t} = \gamma_ {0, s} \frac {1 - \gamma_ {s , t} ^ {2}}{1 - \gamma_ {0 , t} ^ {2}}, \quad \sigma_ {s, t} ^ {2} = \frac {(1 - \gamma_ {0 , s} ^ {2}) (1 - \gamma_ {s , t} ^ {2})}{1 - \gamma_ {0 , t} ^ {2}}, \tag {9}
$$

$$
\kappa_ {t, h} ^ {*} = \frac {\nu_ {t - h , t} \left(1 - \gamma_ {0 , t} ^ {2}\right)}{\gamma_ {0 , t} \beta_ {t} h} - 1, \quad \omega_ {t, h} ^ {*} = \frac {\mu_ {t - h , t} - 1}{\beta_ {t} h} + \frac {1 + \kappa_ {t , h} ^ {*}}{1 - \gamma_ {0 , t} ^ {2}} - \frac {1}{2}, \tag {10}
$$

$$
(\sigma_ {t, h} ^ {*}) ^ {2} = \sigma_ {t - h, t} ^ {2} + \frac {1}{n} \nu_ {t - h, t} ^ {2} \mathbb {E} _ {X _ {t}} \left[ \mathrm {T r} \left(\mathrm {V a r} \left(X _ {0} | X _ {t}\right)\right) \right],
$$

where $n$ is the data dimensionality, $\operatorname{Var}(X_0|X_t)$ is the covariance matrix of the conditional data distribution $\mathrm{Law}(X_0|X_t)$ (so $\mathrm{Tr}(\mathrm{Var}(X_0|X_t))$ is the overall variance across all $n$ dimensions) and the expectation $\mathbb{E}_{X_t}[\cdot]$ is taken with respect to the unconditional noisy data distribution $\mathrm{Law}(X_t)$.

Theorem 1. Consider a DPM characterized by SDEs (6) with reverse diffusion trained till optimality.
Let $N \in \mathbb{N}$ be any natural number and $h = 1 / N$ . Consider the following class of fixed step size $h$ reverse SDE solvers parameterized with triplets of real numbers $\{(\hat{\kappa}_{t,h},\hat{\omega}_{t,h},\hat{\sigma}_{t,h})|t = h,2h,..,1\}$ : + +$$ +\hat {X} _ {t - h} = \hat {X} _ {t} + \beta_ {t} h \left(\left(\frac {1}{2} + \hat {\omega} _ {t, h}\right) \hat {X} _ {t} + \left(1 + \hat {\kappa} _ {t, h}\right) s _ {\theta^ {*}} (\hat {X} _ {t}, t)\right) + \hat {\sigma} _ {t, h} \xi_ {t}, \tag {11} +$$ + +where $\theta^{*}$ is given by (8), $t = 1,1 - h,\dots,h$ and $\xi_{t}$ are i.i.d. samples from $\mathcal{N}(0,\mathrm{I})$ . Then: + +(i) Log-likelihood of sample paths $X = \{X_{kh}\}_{k=0}^{N}$ under generative model $\hat{X}$ is maximized for $\hat{\kappa}_{t,h} = \kappa_{t,h}^*$ , $\hat{\omega}_{t,h} = \omega_{t,h}^*$ and $\hat{\sigma}_{t,h} = \sigma_{t,h}^*$ . + +(ii) Assume that the SDE solver (11) starts from random variable $\hat{X}_1 \sim \mathrm{Law}(X_1)$ . If $X_0$ is a constant or a Gaussian random variable with diagonal isotropic covariance matrix (i.e. $\delta^2\mathrm{I}$ for $\delta > 0$ ), then generative model $\hat{X}$ is exact for $\hat{\kappa}_{t,h} = \kappa_{t,h}^{*}$ , $\hat{\omega}_{t,h} = \omega_{t,h}^{*}$ and $\hat{\sigma}_{t,h} = \sigma_{t,h}^{*}$ . + +Table 1: Input types for speaker conditioning $g_{t}(Y)$ compared in terms of speaker similarity. + +
|               | Diff-LibriTTS |       |       | Diff-VCTK |       |       |
|---------------|---------------|-------|-------|-----------|-------|-------|
|               | d-only        | wodyn | whole | d-only    | wodyn | whole |
| Most similar  | 27.0%         | 38.0% | 34.1% | 27.2%     | 46.7% | 23.6% |
| Least similar | 28.9%         | 29.3% | 38.5% | 25.3%     | 23.9% | 48.6% |
Theorem 1 provides an improved DPM sampling scheme which comes at no additional computational cost compared to standard methods (except for the data-dependent term in $\sigma^{*}$, as discussed in Section 4.3) and requires neither model re-training nor an extensive search over the noise schedule space. The proof of this theorem is given in Appendix C. Note that it establishes optimality of the reverse SDE solver (11) with the parameters (10) in terms of the likelihood of discrete paths $X = \{X_{kh}\}_{k=0}^{N}$, while the optimality of the continuous model (6-R) on continuous paths $\{X_t\}_{t \in [0,1]}$ is guaranteed for a model with parameters $\theta = \theta^{*}$ as shown in (Song et al., 2021c).

The class of reverse SDE solvers considered in Theorem 1 is rather broad: it is the class of all fixed-step solvers whose increments at time $t$ are a linear combination of $\hat{X}_t$, $s_\theta(\hat{X}_t, t)$ and Gaussian noise with zero mean and diagonal isotropic covariance matrix. As a particular case it includes the Euler-Maruyama solver $(\hat{\kappa}_{t,h} \equiv 0, \hat{\omega}_{t,h} \equiv 0, \hat{\sigma}_{t,h} \equiv \sqrt{\beta_t h})$, and for fixed $t$ and $h \to 0$ we have $\kappa_{t,h}^* = \bar{o}(1)$, $\omega_{t,h}^* = \bar{o}(1)$ and $\sigma_{t,h}^* = \sqrt{\beta_t h} (1 + \bar{o}(1))$ (the proof is given in Appendix B), so the optimal SDE solver significantly differs from the general-purpose Euler-Maruyama solver only when $N$ is rather small or $t$ has the same order as $h$, i.e., on the final steps of DPM inference. Appendix G contains toy examples demonstrating the difference between the proposed optimal SDE solver and the Euler-Maruyama one depending on step size.

Result (ii) of Theorem 1 strengthens result (i) for some particular data distributions, but it may seem useless since in practice the data distribution is far from being constant or Gaussian. However, in the case of generation with strong conditioning (e.g.
mel-spectrogram inversion), the assumptions on the data distribution may become viable: in the limiting case when our model is conditioned on $c = \psi(X_0)$ for an injective function $\psi$, the random variable $X_0|c$ becomes a constant $\psi^{-1}(c)$.

# 4 EXPERIMENTS

We trained two groups of models: Diff-VCTK models on the VCTK dataset (Yamagishi et al., 2019) containing 109 speakers (9 speakers were held out for testing purposes) and Diff-LibriTTS models on LibriTTS (Zen et al., 2019) containing approximately 1100 speakers (10 speakers were held out). For every model, both the encoder and the decoder were trained on the same dataset. Training hyperparameters, implementation and data processing details can be found in Appendix I. For mel-spectrogram inversion, we used the pre-trained universal HiFi-GAN vocoder (Kong et al., 2020) operating at $22.05\mathrm{kHz}$. All subjective human evaluation was carried out on Amazon Mechanical Turk (AMT) with Master assessors to ensure the reliability of the obtained Mean Opinion Scores (MOS). In all AMT tests we considered unseen-to-unseen conversion with 25 speakers unseen by both Diff-VCTK and Diff-LibriTTS: 9 VCTK speakers, 10 LibriTTS speakers and 6 internal speakers. For VCTK source speakers we also ensured that the source phrases were unseen during training. Further details of the AMT listening tests are given in Appendix J. A small subset of the speech samples used in them is available at our demo page https://diffvc-fast-ml-solver.github.io, which we encourage the reader to visit. The code will soon be published at https://github.com/huawei-noah/Speech-Backbones.
As for sampling, we considered the following class of reverse SDE solvers:

$$
\hat {X} _ {t - h} = \hat {X} _ {t} + \beta_ {t} h \left(\left(\frac {1}{2} + \hat {\omega} _ {t, h}\right) (\hat {X} _ {t} - \bar {X}) + (1 + \hat {\kappa} _ {t, h}) s _ {\theta} (\hat {X} _ {t}, \bar {X}, g _ {t} (Y), t)\right) + \hat {\sigma} _ {t, h} \xi_ {t}, \tag {12}
$$

where $t = 1, 1 - h, \ldots, h$ and $\xi_{t}$ are i.i.d. samples from $\mathcal{N}(0,\mathrm{I})$. For $\hat{\kappa}_{t,h} = \kappa_{t,h}^{*}$, $\hat{\omega}_{t,h} = \omega_{t,h}^{*}$ and $\hat{\sigma}_{t,h} = \sigma_{t,h}^{*}$ (where $\kappa_{t,h}^{*}$, $\omega_{t,h}^{*}$ and $\sigma_{t,h}^{*}$ are given by (10)), it becomes the maximum likelihood reverse SDE solver for the MR-VP DPM (1-2), as shown in Appendix D. In practice it is not trivial to estimate the variance of the conditional distribution $\operatorname{Law}(X_0|X_t)$, so we skipped this term in $\sigma_{t,h}^{*}$,

Table 2: Subjective evaluation (MOS) of one-shot VC models trained on VCTK. Ground truth recordings were evaluated only for VCTK speakers.
| Model | Naturalness (VCTK test, 9 speakers, 54 pairs) | Similarity (VCTK test) | Naturalness (Whole test, 25 speakers, 350 pairs) | Similarity (Whole test) |
|---|---|---|---|---|
| AGAIN-VC | 1.98 ± 0.05 | 1.97 ± 0.08 | 1.87 ± 0.03 | 1.75 ± 0.04 |
| FragmentVC | 2.20 ± 0.06 | 2.45 ± 0.09 | 1.91 ± 0.03 | 1.93 ± 0.04 |
| VQMIVC | 2.89 ± 0.06 | 2.60 ± 0.10 | 2.48 ± 0.04 | 1.95 ± 0.04 |
| Diff-VCTK-ML-6 | 3.73 ± 0.06 | 3.47 ± 0.09 | 3.39 ± 0.04 | 2.69 ± 0.05 |
| Diff-VCTK-ML-30 | 3.73 ± 0.06 | 3.57 ± 0.09 | 3.44 ± 0.04 | 2.71 ± 0.05 |
| Ground truth | 4.55 ± 0.05 | 4.52 ± 0.07 | 4.55 ± 0.05 | 4.52 ± 0.07 |
+ +Table 3: Subjective evaluation (MOS) of one-shot VC models trained on large-scale datasets. + +
| Model | Naturalness (VCTK test, 9 speakers, 54 pairs) | Similarity (VCTK test) | Naturalness (Whole test, 25 speakers, 350 pairs) | Similarity (Whole test) |
|---|---|---|---|---|
| Diff-LibriTTS-EM-6 | 1.68 ± 0.06 | 1.53 ± 0.07 | 1.57 ± 0.02 | 1.47 ± 0.03 |
| Diff-LibriTTS-PF-6 | 3.11 ± 0.07 | 2.58 ± 0.11 | 2.99 ± 0.03 | 2.50 ± 0.04 |
| Diff-LibriTTS-ML-6 | 3.84 ± 0.08 | 3.08 ± 0.11 | 3.80 ± 0.03 | 3.27 ± 0.05 |
| Diff-LibriTTS-ML-30 | 3.96 ± 0.08 | 3.23 ± 0.11 | 4.02 ± 0.03 | 3.39 ± 0.05 |
| BNE-PPG-VC | 3.95 ± 0.08 | 3.27 ± 0.12 | 3.83 ± 0.03 | 3.03 ± 0.05 |
assuming this variance to be rather small because of the strong conditioning on $g_{t}(Y)$, and simply used $\hat{\sigma}_{t,h} = \sigma_{t - h,t}$; we call this sampling method ML-$N$ ($N = 1/h$ is the number of SDE solver steps). We also experimented with the Euler-Maruyama solver EM-$N$ (i.e. $\hat{\kappa}_{t,h} = 0$, $\hat{\omega}_{t,h} = 0$, $\hat{\sigma}_{t,h} = \sqrt{\beta_{t}h}$) and the "probability flow sampling" from (Song et al., 2021c), which we denote by PF-$N$ ($\hat{\kappa}_{t,h} = -0.5$, $\hat{\omega}_{t,h} = 0$, $\hat{\sigma}_{t,h} = 0$).

# 4.1 SPEAKER CONDITIONING ANALYSIS

For each dataset we trained three models, one for each input type of the speaker conditioning network $g_{t}(Y)$ (see Section 2.2). Although these input types had little influence on either speaker similarity or speech naturalness, we ran two experiments to choose the best model for each training dataset in terms of speaker similarity for further comparison with the baseline systems. We compared voice conversion results (produced by the ML-30 sampling scheme) on 92 source-target pairs. AMT workers were asked which of the three models (if any) sounded most similar to the target speaker and which of them (if any) sounded least similar. For the Diff-VCTK and Diff-LibriTTS models, each conversion pair was evaluated 4 and 5 times respectively. Table 1 demonstrates that for both Diff-VCTK and Diff-LibriTTS the best option is wodyn, i.e. conditioning the decoder at time $t$ on the speaker embedding together with the noisy target mel-spectrogram $Y_{t}$. Conditioning on $Y_{t}$ allows the decoder to make use of diffusion-specific information about how the noisy target sounds, whereas the embedding from the pre-trained speaker verification network contains information only about the clean target. Taking these results into consideration, we used Diff-VCTK-wodyn and Diff-LibriTTS-wodyn in the remaining experiments.
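To make the sampling schemes above concrete, one step of the solver family (12) can be sketched in a few lines of code. This is an illustrative 1-D version of the unconditional form (i.e. without $\bar{X}$ and $g_t(Y)$); the toy score function and the linear $\beta$ schedule are our own assumptions, not taken from the paper:

```python
import math
import random

def solver_step(x, t, h, beta, score, kappa_hat, omega_hat, sigma_hat, rng):
    """One fixed step x_t -> x_{t-h}: the increment is a linear combination of
    x_t, the score s_theta(x_t, t), and zero-mean Gaussian noise."""
    drift = beta(t) * h * ((0.5 + omega_hat) * x + (1.0 + kappa_hat) * score(x, t))
    return x + drift + sigma_hat * rng.gauss(0.0, 1.0)

beta = lambda t: 0.05 + (20.0 - 0.05) * t  # assumed linear noise schedule
score = lambda x, t: -x                    # toy score of a standard normal target

rng, h, x = random.Random(0), 0.01, 1.5
for k in range(int(round(1.0 / h))):       # t = 1, 1 - h, ..., h
    t = 1.0 - k * h
    # EM-N corresponds to kappa_hat = omega_hat = 0, sigma_hat = sqrt(beta_t * h);
    # ML-N and PF-N change only these three coefficients, not the step itself.
    x = solver_step(x, t, h, beta, score, 0.0, 0.0, math.sqrt(beta(t) * h), rng)
```

The point of this sketch is that EM-$N$, PF-$N$ and ML-$N$ all share the same step; they differ only in the choice of $(\hat{\kappa}_{t,h}, \hat{\omega}_{t,h}, \hat{\sigma}_{t,h})$.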
# 4.2 ANY-TO-ANY VOICE CONVERSION

We chose four recently proposed VC models capable of one-shot many-to-many synthesis as the baselines:

- AGAIN-VC (Chen et al., 2021b), an improved version of the conventional autoencoder AdaIN-VC, solving the disentanglement problem by means of instance normalization;
- FragmentVC (Lin et al., 2021), an attention-based model relying on wav2vec 2.0 (Baevski et al., 2020) to obtain speech content from the source utterance;

Table 4: Reverse SDE solvers compared in terms of FID. $N$ is the number of SDE solver steps.
| Sampling scheme | VP DPM (N=10) | VP DPM (N=100) | sub-VP DPM (N=10) | sub-VP DPM (N=100) | VE DPM (N=10) | VE DPM (N=100) |
|---|---|---|---|---|---|---|
| Euler-Maruyama | 229.6 | 19.68 | 312.3 | 19.83 | 462.1 | 24.77 |
| Reverse Diffusion | 679.8 | 65.95 | 312.2 | 19.74 | 461.1 | 303.2 |
| Probability Flow | 88.92 | 5.70 | 64.22 | 4.42 | 495.3 | 214.5 |
| Ancestral Sampling | 679.8 | 68.35 | — | — | 454.7 | 17.83 |
| Maximum Likelihood (τ = 0.1) | 260.3 | 4.34 | 317.0 | 6.63 | 461.9 | 23.63 |
| Maximum Likelihood (τ = 0.5) | 24.45 | 7.82 | 30.90 | 6.43 | 462.0 | 10.07 |
| Maximum Likelihood (τ = 1.0) | 41.78 | 7.94 | 48.02 | 6.51 | 48.51 | 12.37 |
- VQMIVC (Wang et al., 2021), the state-of-the-art approach among those employing vector quantization techniques;
- BNE-PPG-VC (Liu et al., 2021b), an improved variant of PPG-based VC models combining a bottleneck feature extractor obtained from a phoneme recognizer with a seq2seq-based synthesis module.

As shown in (Kim et al., 2021), PPG-based VC models provide voice conversion quality competitive even with that of state-of-the-art VC models taking the text transcription corresponding to the source utterance as input. Therefore, we can consider BNE-PPG-VC a state-of-the-art model in our setting.

Baseline voice conversion results were produced by the pre-trained VC models provided in the official GitHub repositories. Since only BNE-PPG-VC has a model pre-trained on a large-scale dataset (namely, LibriTTS + VCTK), we ran two subjective human evaluation tests: the first comparing Diff-VCTK with AGAIN-VC, FragmentVC and VQMIVC trained on VCTK, and the second comparing Diff-LibriTTS with BNE-PPG-VC. The results of these tests are given in Tables 2 and 3 respectively. Speech naturalness and speaker similarity were assessed separately. AMT workers evaluated voice conversion quality on 350 source-target pairs on a 5-point scale. In the first test, each pair was assessed 6 times on average in both the speech naturalness and the speaker similarity evaluation; in the second, each pair was assessed 8 and 9 times on average in the speech naturalness and speaker similarity evaluation respectively. No fewer than 41 unique assessors took part in each test.

Table 2 demonstrates that our model performs significantly better than the baselines in terms of both naturalness and speaker similarity, even when only 6 reverse diffusion iterations are used. Despite working almost equally well on VCTK speakers, the best baseline, VQMIVC, shows poor performance on other speakers, perhaps because it is not able to generalize to different domains with lower recording quality.
Although Diff-VCTK performance also degrades on non-VCTK speakers, it achieves a good speaker similarity MOS of 3.6 on VCTK speakers when the ML-30 sampling scheme is used, and only a slightly worse MOS of 3.5 when 5x fewer iterations are used at inference.

Table 3 contains human evaluation results of Diff-LibriTTS for four sampling schemes: ML-30 with 30 reverse SDE solver steps, and ML-6, EM-6 and PF-6 with 6 steps of reverse diffusion. The three schemes taking 6 steps achieved a real-time factor (RTF) of around 0.1 on GPU (i.e. inference was 10 times faster than real time), while the one taking 30 steps had an RTF of around 0.5. The proposed model Diff-LibriTTS-ML-30 and the baseline BNE-PPG-VC show the same performance on the VCTK test set in terms of speech naturalness, with the latter being slightly better in terms of speaker similarity, which can perhaps be explained by the fact that BNE-PPG-VC was trained on the union of VCTK and LibriTTS whereas our model was trained only on LibriTTS. On the whole test set, which also contains unseen LibriTTS and internal speakers, Diff-LibriTTS-ML-30 outperforms the BNE-PPG-VC model, achieving MOS 4.0 and 3.4 in terms of speech naturalness and speaker similarity respectively. Due to employing a PPG extractor trained on the large-scale ASR dataset LibriSpeech (Panayotov et al., 2015), BNE-PPG-VC has fewer mispronunciation issues than our model, but its synthesized speech suffers from more sonic artifacts. This observation makes us believe that incorporating PPG features into the proposed diffusion VC framework is a promising direction for future research.
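For completeness, the per-step coefficients separating these sampling schemes can be computed directly. The sketch below reconstructs $\kappa^{*}_{t,h}$, $\omega^{*}_{t,h}$ and $\sigma^{*}_{t,h}$ from the identities used in Appendix B (equations (25)-(27)), with the data-dependent term in $\sigma^{*}$ dropped as in our experiments; the linear $\beta$ schedule and the function names are our own illustrative assumptions:

```python
import math

BETA0, BETA1 = 0.05, 20.0  # assumed linear schedule: beta_t = BETA0 + (BETA1 - BETA0) * t

def gamma(s, t):
    """gamma_{s,t} = exp(-0.5 * int_s^t beta_u du) for the linear schedule."""
    integral = BETA0 * (t - s) + 0.5 * (BETA1 - BETA0) * (t * t - s * s)
    return math.exp(-0.5 * integral)

def ml_coefficients(t, h):
    """(kappa*, omega*, sigma*) of the ML scheme with the data-dependent term dropped."""
    g_st, g_0t, g_0s = gamma(t - h, t), gamma(0.0, t), gamma(0.0, t - h)
    beta_t = BETA0 + (BETA1 - BETA0) * t
    mu = g_st * (1 - g_0s ** 2) / (1 - g_0t ** 2)
    nu = g_0s * (1 - g_st ** 2) / (1 - g_0t ** 2)
    sigma2 = (1 - g_st ** 2) * (1 - g_0s ** 2) / (1 - g_0t ** 2)
    kappa = nu * (1 - g_0t ** 2) / (g_0t * beta_t * h) - 1
    omega = (mu - 1 + nu / g_0t - 0.5 * beta_t * h) / (beta_t * h)
    return kappa, omega, math.sqrt(sigma2)

# For small h the coefficients collapse to the Euler-Maruyama choices
# (0, 0, sqrt(beta_t * h)), in line with the asymptotics of Appendix B.
```

With a small step such as $h = 10^{-4}$ the returned triple is numerically indistinguishable from the Euler-Maruyama one, while on the coarse grids used at inference ($N = 6$ or $N = 30$) the corrections become substantial.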
Table 3 also demonstrates the benefits of the proposed maximum likelihood sampling scheme over other sampling methods for a small number of inference steps: only the ML-$N$ scheme allows us to use as few as $N = 6$ iterations with an acceptable quality degradation of MOS 0.2 and 0.1 in terms of naturalness and speaker similarity respectively, while the two other competing methods lead to much more significant quality degradation.

# 4.3 MAXIMUM LIKELIHOOD SAMPLING

![](images/78b32d467c1e39278ce380bfd024340b16117307a336835b59d77afa8e65f627.jpg)
Figure 2: CIFAR-10 images randomly sampled from VP DPM by running 10 reverse diffusion steps with the following schemes (from left to right): "euler-maruyama", "probability flow", "maximum likelihood ($\tau = 0.5$)", "maximum likelihood ($\tau = 1.0$)".

![](images/655a4c4dfc4028559738e59545e26cfc4422a068d5b8bb06f26a0b938b7ab55c.jpg)

![](images/698e1718258bdc52f5735cd7e5495d0c47fd231de6c0a759d04001ee6d71eacf.jpg)

![](images/b24c06ba9db2837e12067145e8e4fd14834664d436743e330f8164ce0303ca50.jpg)

To show that the maximum likelihood sampling scheme proposed in Section 3 generalizes to different tasks and DPM types, we took the models trained by Song et al. (2021c) on the CIFAR-10 image generation task and compared our method with the other sampling schemes described in that paper in terms of Fréchet Inception Distance (FID).

The main difficulty in applying the maximum likelihood SDE solver is estimating the data-dependent term $\mathbb{E}[\mathrm{Tr}\left(\mathrm{Var}\left(X_0|X_t\right)\right)]$ in $\sigma_{t,h}^{*}$.
Although in the current experiments we simply set this term to zero, we can think of two possible ways to estimate it: (i) approximate $\mathrm{Var}\left(X_0|X_t\right)$ with $\mathrm{Var}\left(\hat{X}_0|\hat{X}_t = X_t\right)$: sample noisy data $X_{t}$, solve the reverse SDE with a sufficiently small step size starting from the terminal condition $\hat{X}_{t} = X_{t}$ several times, and calculate the sample variance of the resulting solutions at the initial points $\hat{X}_0$; (ii) use formula (58) from Appendix C to calculate $\mathrm{Var}\left(X_0|X_t\right)$ assuming that $X_0$ is distributed normally with mean and variance equal to the sample mean and sample variance computed on the training dataset. Experimenting with these techniques and exploring new ones seems an interesting direction for future research.

Another important practical consideration is that the proposed scheme is proven to be optimal only for score matching networks trained till optimality. Therefore, in the experiments whose results are reported in Table 4 we apply the maximum likelihood sampling scheme only when $t \leq \tau$, using the standard Euler-Maruyama solver for $t > \tau$, for some hyperparameter $\tau \in [0,1]$. This modification relies on the assumption that the score matching network is closer to being optimal for smaller noise.

Table 4 shows that, although likelihood and FID are two metrics that do not perfectly correlate (Song et al., 2021b), in most cases our maximum likelihood SDE solver performs best in terms of FID. It is also worth mentioning that although $\tau = 1$ is always a rather good choice, tuning this hyperparameter can lead to even better performance. Randomly chosen generated images for various sampling methods can be found in Figure 2.

# 5 CONCLUSION

In this paper, a novel one-shot many-to-many voice conversion model has been presented.
Its encoder design and powerful diffusion-based decoder make it possible to achieve good results in terms of both speaker similarity and speech naturalness, even on out-of-domain unseen speakers. Subjective human evaluation verified that the proposed model delivers a scalable VC solution with competitive performance. Furthermore, aiming at fast synthesis, we have developed and theoretically justified a novel sampling scheme. The main idea behind it is to modify the general-purpose Euler-Maruyama SDE solver so as to maximize the likelihood of discrete sample paths of the forward diffusion. Thanks to the proposed sampling scheme, our VC model is capable of high-quality voice conversion with as few as 6 reverse diffusion steps. Moreover, experiments on the image generation task show that all known diffusion model types can benefit from the proposed SDE solver.

# REFERENCES

Brian D.O. Anderson. Reverse-time Diffusion Equation Models. Stochastic Processes and their Applications, 12(3):313-326, 1982. ISSN 0304-4149.
Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations. In Advances in Neural Information Processing Systems, volume 33, pp. 12449-12460. Curran Associates, Inc., 2020.
Alexandros Beskos and Gareth O. Roberts. Exact Simulation of Diffusions. The Annals of Applied Probability, 15(4):2422-2444, 2005. ISSN 10505164.
Nanxin Chen, Yu Zhang, Heiga Zen, Ron J Weiss, Mohammad Norouzi, and William Chan. WaveGrad: Estimating Gradients for Waveform Generation. In International Conference on Learning Representations, 2021a.
Yen-Hao Chen, D. Wu, Tsung-Han Wu, and Hung-yi Lee. Again-VC: A One-Shot Voice Conversion Using Activation Guidance and Adaptive Instance Normalization. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5954-5958, 2021b.
Ju-Chieh Chou and Hung-yi Lee.
One-Shot Voice Conversion by Separating Speaker and Content Representations with Instance Normalization. In *Interspeech* 2019, 20th Annual Conference of the International Speech Communication Association, Graz, Austria, 15-19 September 2019, pp. 664-668. ISCA, 2019. +Pierre Henry-Labordère, Xiaolu Tan, and Nizar Touzi. Unbiased Simulation of Stochastic Differential Equations. The Annals of Applied Probability, 27(6):3305-3341, 2017. ISSN 10505164. +Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, volume 33. Curran Associates, Inc., 2020. +Aapo Hyvarinen. Estimation of Non-Normalized Statistical Models by Score Matching. Journal of Machine Learning Research, 6(24):695-709, 2005. +Tatsuma Ishihara and Daisuke Saito. Attention-Based Speaker Embeddings for One-Shot Voice Conversion. In *Interspeech* 2020, 21st Annual Conference of the International Speech Communication Association, Virtual Event, Shanghai, China, 25-29 October 2020, pp. 806-810. ISCA, 2020. +Myeonghun Jeong, Hyeongju Kim, Sung Jun Cheon, Byoung Jin Choi, and Nam Soo Kim. Diff-TTS: A Denoising Diffusion Model for Text-to-Speech. In Proc. Interspeech 2021, pp. 3605-3609, 2021. +Ye Jia, Yu Zhang, Ron Weiss, Quan Wang, Jonathan Shen, Fei Ren, Zhifeng Chen, Patrick Nguyen, Ruoming Pang, Ignacio Lopez Moreno, and Yonghui Wu. Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis. In Advances in Neural Information Processing Systems 31, pp. 4480-4490. Curran Associates, Inc., 2018. +Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, Nobukatsu Hojo, and Shogo Seki. VoiceGrad: Non-Parallel Any-to-Many Voice Conversion with Annealed Langevin Dynamics, 2020. +Kang-wook Kim, Seung-won Park, and Myun-chul Joe. 
Assem-VC: Realistic Voice Conversion by Assembling Modern Speech Synthesis Techniques, 2021. +Diederik P. Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational Diffusion Models, 2021. +Peter E. Kloeden and Eckhard Platen. Numerical Solution of Stochastic Differential Equations, volume 23 of Stochastic Modelling and Applied Probability. Springer-Verlag Berlin Heidelberg, 1992. + +Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, virtual, 2020. +Zhifeng Kong and Wei Ping. On Fast Sampling of Diffusion Probabilistic Models. In ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models, 2021. +Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. DiffWave: A Versatile Diffusion Model for Audio Synthesis. In International Conference on Learning Representations, 2021. +Sang-gil Lee, Heeseung Kim, Chaehun Shin, Xu Tan, Chang Liu, Qi Meng, Tao Qin, Wei Chen, Sungroh Yoon, and Tie-Yan Liu. PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Driven Adaptive Prior, 2021. +Yist Y. Lin, Chung-Ming Chien, Jheng-Hao Lin, Hung-yi Lee, and Lin-Shan Lee. FragmentVC: Any-To-Any Voice Conversion by End-To-End Extracting and Fusing Fine-Grained Voice Fragments with Attention. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5939-5943, 2021. +Robert S. Liptser and Albert N. Shiryaev. Statistics of Random Processes, volume 5 of Stochastic Modelling and Applied Probability. Springer-Verlag, 1978. +Songxiang Liu, Yuewen Cao, Dan Su, and Helen Meng. DiffSVC: A Diffusion Probabilistic Model for Singing Voice Conversion, 2021a. +Songxiang Liu, Yuewen Cao, Disong Wang, Xixin Wu, Xunying Liu, and Helen Meng. 
Any-to-Many Voice Conversion With Location-Relative Sequence-to-Sequence Modeling. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:1717-1728, 2021b.
Manh Luong and Viet Anh Tran. Many-to-Many Voice Conversion Based Feature Disentanglement Using Variational Autoencoder. In Proc. Interspeech 2021, pp. 851-855, 2021.
Michael McAuliffe, Michaela Socolof, Sarah Mihuc, Michael Wagner, and Morgan Sonderegger. Montreal Forced Aligner: Trainable Text-Speech Alignment Using Kaldi. In Proc. Interspeech 2017, pp. 498-502, 2017.
Shahan Nercessian. Improved Zero-Shot Voice Conversion Using Explicit Conditioning Signals. In Interspeech 2020, 21st Annual Conference of the International Speech Communication Association, Virtual Event, Shanghai, China, 25-29 October 2020, pp. 4711-4715. ISCA, 2020.
Alexander Quinn Nichol and Prafulla Dhariwal. Improved Denoising Diffusion Probabilistic Models. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 8162-8171. PMLR, 18-24 Jul 2021.
Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: An ASR Corpus Based on Public Domain Audio Books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206-5210, 2015.
Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima Sadekova, and Mikhail Kudinov. Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 8599-8608. PMLR, 2021.
Kaizhi Qian, Yang Zhang, Shiyu Chang, Xuesong Yang, and Mark Hasegawa-Johnson. AutoVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 5210-5219. PMLR, 09-15 Jun 2019.
Kaizhi Qian, Zeyu Jin, Mark Hasegawa-Johnson, and Gautham J. Mysore. F0-Consistent Many-To-Many Non-Parallel Voice Conversion Via Conditional Autoencoder. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6284-6288, 2020.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015, pp. 234-241. Springer International Publishing, 2015.
Yuki Saito, Yusuke Ijima, Kyosuke Nishida, and Shinnosuke Takamichi. Non-Parallel Voice Conversion Using Variational Autoencoders Conditioned by Phonetic Posteriorgrams and D-Vectors. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5274-5278, 2018.
Robin San-Roman, Eliya Nachmani, and Lior Wolf. Noise Estimation for Generative Diffusion Models, 2021.
Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising Diffusion Implicit Models. In International Conference on Learning Representations, 2021a.
Yang Song, Conor Durkan, Iain Murray, and Stefano Ermon. Maximum Likelihood Training of Score-Based Diffusion Models, 2021b.
Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-Based Generative Modeling through Stochastic Differential Equations. In International Conference on Learning Representations, 2021c.
Pascal Vincent. A Connection Between Score Matching and Denoising Autoencoders. Neural Computation, 23(7):1661-1674, 2011.
Disong Wang, Liqun Deng, Yu Ting Yeung, Xiao Chen, Xunying Liu, and Helen Meng. VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised Speech Representation Disentanglement for One-Shot Voice Conversion. In Proc. Interspeech 2021, pp. 1344-1348, 2021.
Daniel Watson, Jonathan Ho, Mohammad Norouzi, and William Chan. Learning to Efficiently Sample from Diffusion Probabilistic Models, 2021.
+Da-Yi Wu, Yen-Hao Chen, and Hung-yi Lee. VQVC+: One-Shot Voice Conversion by Vector Quantization and U-Net Architecture. In Helen Meng, Bo Xu, and Thomas Fang Zheng (eds.), Interspeech 2020, 21st Annual Conference of the International Speech Communication Association, Virtual Event, Shanghai, China, 25-29 October 2020, pp. 4691-4695. ISCA, 2020. +Junichi Yamagishi, Christophe Veaux, and Kirsten MacDonald. CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit (version 0.92), 2019. +Heiga Zen, Rob Clark, Ron J. Weiss, Viet Dang, Ye Jia, Yonghui Wu, Yu Zhang, and Zhifeng Chen. LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech. In Interspeech, 2019. + +# A FORWARD VP SDE SOLUTION + +Since function $\gamma_{0,t}^{-1}X_{t}$ is linear in $X_{t}$ , taking its differential does not require second order derivative term in Ito's formula: + +$$ +\begin{array}{l} d \left(\gamma_ {0, t} ^ {- 1} X _ {t}\right) = d \left(e ^ {\frac {1}{2} \int_ {0} ^ {t} \beta_ {u} d u} X _ {t}\right) \\ = e ^ {\frac {1}{2} \int_ {0} ^ {t} \beta_ {u} d u} \cdot \frac {1}{2} \beta_ {t} X _ {t} d t + e ^ {\frac {1}{2} \int_ {0} ^ {t} \beta_ {u} d u} \cdot \left(- \frac {1}{2} \beta_ {t} X _ {t} d t + \sqrt {\beta_ {t}} d \overrightarrow {W _ {t}}\right) \tag {13} \\ = \sqrt {\beta_ {t}} e ^ {\frac {1}{2} \int_ {0} ^ {t} \beta_ {u} d u} d \overrightarrow {W _ {t}}. \\ \end{array} +$$ + +Integrating this expression from $s$ to $t$ results in an Itô's integral: + +$$ +e ^ {\frac {1}{2} \int_ {0} ^ {t} \beta_ {u} d u} X _ {t} - e ^ {\frac {1}{2} \int_ {0} ^ {s} \beta_ {u} d u} X _ {s} = \int_ {s} ^ {t} \sqrt {\beta_ {\tau}} e ^ {\frac {1}{2} \int_ {0} ^ {\tau} \beta_ {u} d u} d \overrightarrow {W _ {\tau}}, \tag {14} +$$ + +or + +$$ +X _ {t} = e ^ {- \frac {1}{2} \int_ {s} ^ {t} \beta_ {u} d u} X _ {s} + \int_ {s} ^ {t} \sqrt {\beta_ {\tau}} e ^ {- \frac {1}{2} \int_ {\tau} ^ {t} \beta_ {u} d u} d \overrightarrow {W _ {\tau} ^ {\prime}}. 
\tag {15}
$$

The integrand on the right-hand side is deterministic and belongs to $L_{2}[0,1]$ (for practical noise schedule choices), so its Itô integral is a normal random variable and a martingale (in particular, it has zero mean), and it satisfies Itô's isometry, which allows us to calculate its variance:

$$
\operatorname {V a r} \left(X _ {t} \mid X _ {s}\right) = \int_ {s} ^ {t} \beta_ {\tau} e ^ {- \int_ {\tau} ^ {t} \beta_ {u} d u} I d \tau = \left(1 - e ^ {- \int_ {s} ^ {t} \beta_ {u} d u}\right) I. \tag {16}
$$

Thus,

$$
\operatorname {L a w} \left(X _ {t} \mid X _ {s}\right) = \mathcal {N} \left(e ^ {- \frac {1}{2} \int_ {s} ^ {t} \beta_ {u} d u} X _ {s}, \left(1 - e ^ {- \int_ {s} ^ {t} \beta_ {u} d u}\right) I\right) = \mathcal {N} \left(\gamma_ {s, t} X _ {s}, \left(1 - \gamma_ {s, t} ^ {2}\right) I\right). \tag {17}
$$

# B THE OPTIMAL COEFFICIENTS ASYMPTOTICS

We first derive the asymptotics for $\gamma$:

$$
\gamma_ {t - h, t} = e ^ {- \frac {1}{2} \int_ {t - h} ^ {t} \beta_ {u} d u} = 1 - \frac {1}{2} \beta_ {t} h + \bar {o} (h), \tag {18}
$$

$$
\gamma_ {0, t - h} ^ {2} = e ^ {- \int_ {0} ^ {t - h} \beta_ {u} d u} = e ^ {- \int_ {0} ^ {t} \beta_ {u} d u} e ^ {\int_ {t - h} ^ {t} \beta_ {u} d u} = \gamma_ {0, t} ^ {2} (1 + \beta_ {t} h) + \bar {o} (h), \tag {19}
$$

$$
\gamma_ {0, t - h} = e ^ {- \frac {1}{2} \int_ {0} ^ {t - h} \beta_ {u} d u} = e ^ {- \frac {1}{2} \int_ {0} ^ {t} \beta_ {u} d u} e ^ {\frac {1}{2} \int_ {t - h} ^ {t} \beta_ {u} d u} = \gamma_ {0, t} \left(1 + \frac {1}{2} \beta_ {t} h\right) + \bar {o} (h), \tag {20}
$$

$$
\gamma_ {t - h, t} ^ {2} = e ^ {- \int_ {t - h} ^ {t} \beta_ {u} d u} = 1 - \beta_ {t} h + \bar {o} (h).
\tag {21}
$$

Then we find the asymptotics for $\mu$, $\nu$ and $\sigma^2$:

$$
\mu_ {t - h, t} = \left(1 - \frac {1}{2} \beta_ {t} h + \bar {o} (h)\right) \frac {1 - \gamma_ {0 , t} ^ {2} - \gamma_ {0 , t} ^ {2} \beta_ {t} h + \bar {o} (h)}{1 - \gamma_ {0 , t} ^ {2}} = 1 - \frac {1}{2} \beta_ {t} h - \frac {\gamma_ {0 , t} ^ {2}}{1 - \gamma_ {0 , t} ^ {2}} \beta_ {t} h + \bar {o} (h), \tag {22}
$$

$$
\nu_ {t - h, t} = \left(\gamma_ {0, t} \left(1 + \frac {1}{2} \beta_ {t} h\right) + \bar {o} (h)\right) \frac {\beta_ {t} h + \bar {o} (h)}{1 - \gamma_ {0 , t} ^ {2}} = \frac {\gamma_ {0 , t}}{1 - \gamma_ {0 , t} ^ {2}} \beta_ {t} h + \bar {o} (h), \tag {23}
$$

$$
\sigma_ {t - h, t} ^ {2} = \frac {1}{1 - \gamma_ {0 , t} ^ {2}} \left(\beta_ {t} h + \bar {o} (h)\right) \left(1 - \gamma_ {0, t} ^ {2} \left(1 + \beta_ {t} h\right) + \bar {o} (h)\right) = \beta_ {t} h + \bar {o} (h). \tag {24}
$$

Finally, we obtain the asymptotics for $\kappa^{*}$, $\omega^{*}$ and $\sigma^{*}$:

$$
\begin{array}{l} \kappa_ {t, h} ^ {*} = \frac {\nu_ {t - h , t} \left(1 - \gamma_ {0 , t} ^ {2}\right)}{\gamma_ {0 , t} \beta_ {t} h} - 1 = \frac {\gamma_ {0 , t - h} \left(1 - \gamma_ {t - h , t} ^ {2}\right)}{\gamma_ {0 , t} \beta_ {t} h} - 1 \tag {25} \\ = \frac {(\beta_ {t} h + \bar {o} (h)) ((1 + \frac {1}{2} \beta_ {t} h) \gamma_ {0 , t} + \bar {o} (h))}{\gamma_ {0 , t} \beta_ {t} h} - 1 = \bar {o} (1), \\ \end{array}
$$

$$
\begin{array}{l} \beta_ {t} h \omega_ {t, h} ^ {*} = \mu_ {t - h, t} - 1 + \frac {\nu_ {t - h , t}}{\gamma_ {0 , t}} - \frac {1}{2} \beta_ {t} h = 1 - \frac {1}{2} \beta_ {t} h - \frac {\gamma_ {0 , t} ^ {2}}{1 - \gamma_ {0 , t} ^ {2}} \beta_ {t} h - 1 - \frac {1}{2} \beta_ {t} h + \frac {1}{\gamma_ {0 , t}} \times \tag {26} \\ \times \left(\frac {\gamma_ {0 , t}}{1 - \gamma_ {0 , t} ^ {2}} \beta_ {t} h + \bar {o} (h)\right) + \bar {o} (h) = \beta_ {t} h \left(- 1 - \frac {\gamma_ {0 , t} ^ {2}}{1 - \gamma_ {0 , t} ^ {2}} + \frac {1}{1 - \gamma_ {0 , t} ^
{2}}\right) + \bar {o} (h) = \bar {o} (h), \\ \end{array}
$$

$$
\begin{array}{l} \left(\sigma_ {t, h} ^ {*}\right) ^ {2} = \sigma_ {t - h, t} ^ {2} + \nu_ {t - h, t} ^ {2} \mathbb {E} _ {X _ {t}} \left[ \operatorname {T r} \left(\operatorname {V a r} \left(X _ {0} \mid X _ {t}\right)\right) \right] / n = \beta_ {t} h + \bar {o} (h) \\ + \frac {\gamma_ {0 , t} ^ {2}}{\left(1 - \gamma_ {0 , t} ^ {2}\right) ^ {2}} \beta_ {t} ^ {2} h ^ {2} \mathbb {E} _ {X _ {t}} \left[ \operatorname {T r} \left(\operatorname {V a r} \left(X _ {0} \mid X _ {t}\right)\right) \right] / n = \beta_ {t} h (1 + \bar {o} (1)). \tag {27} \\ \end{array}
$$

# C PROOF OF THEOREM 1

The key fact necessary to prove Theorem 1 is established in the following lemma.

Lemma 1. Let $p_{0|t}(\cdot |x)$ be the pdf of the conditional distribution $\operatorname{Law}(X_0|X_t = x)$. Then for any $t \in [0,1]$ and $x \in \mathbb{R}^n$

$$
s _ {\theta^ {*}} (x, t) = - \frac {1}{1 - \gamma_ {0 , t} ^ {2}} \left(x - \gamma_ {0, t} \mathbb {E} _ {p _ {0 | t} (\cdot | x)} X _ {0}\right). \tag {28}
$$

Proof of Lemma 1. As mentioned in (Song et al., 2021c), an expression alternative to (8) can be derived for $\theta^{*}$ under mild assumptions on the data density (Hyvarinen, 2005; Vincent, 2011):

$$
\theta^ {*} = \arg \min _ {\theta} \int_ {0} ^ {1} \lambda_ {t} \mathbb {E} _ {X _ {0} \sim p _ {0} (\cdot)} \mathbb {E} _ {X _ {t} \sim p _ {t | 0} (\cdot | X _ {0})} \| s _ {\theta} (X _ {t}, t) - \nabla \log p _ {t | 0} (X _ {t} | X _ {0}) \| _ {2} ^ {2} d t, \tag {29}
$$

where $\operatorname{Law}(X_0)$ is the data distribution with pdf $p_0(\cdot)$ and $\operatorname{Law}(X_t|X_0 = x_0)$ has pdf $p_{t|0}(\cdot|x_0)$.
By Bayes' formula, we can rewrite this in terms of the pdfs $p_t(\cdot)$ and $p_{0|t}(\cdot|x_t)$ of the distributions $\operatorname{Law}(X_t)$ and $\operatorname{Law}(X_0|X_t = x_t)$ respectively:

$$
\theta^ {*} = \arg \min _ {\theta} \int_ {0} ^ {1} \lambda_ {t} \mathbb {E} _ {X _ {t} \sim p _ {t} (\cdot)} \mathbb {E} _ {X _ {0} \sim p _ {0 | t} (\cdot | X _ {t})} \| s _ {\theta} (X _ {t}, t) - \nabla \log p _ {t | 0} (X _ {t} | X _ {0}) \| _ {2} ^ {2} d t. \tag {30}
$$

For any $n$-dimensional random variable $\xi$ with finite second moment and any deterministic vector $a$ we have

$$
\begin{array}{l} \mathbb {E} \| \xi - a \| _ {2} ^ {2} = \mathbb {E} \| \xi - \mathbb {E} \xi + \mathbb {E} \xi - a \| _ {2} ^ {2} = \mathbb {E} \| \xi - \mathbb {E} \xi \| _ {2} ^ {2} + 2 \langle \mathbb {E} [ \xi - \mathbb {E} \xi ], \mathbb {E} \xi - a \rangle \tag {31} \\ + \mathbb {E} \| \mathbb {E} \xi - a \| _ {2} ^ {2} = \mathbb {E} \| \xi - \mathbb {E} \xi \| _ {2} ^ {2} + \| \mathbb {E} \xi - a \| _ {2} ^ {2}. \\ \end{array}
$$

In our case $\xi = \nabla \log p_{t|0}(X_t|X_0)$ and $a = s_{\theta}(X_t,t)$, so $\mathbb{E}\| \xi -\mathbb{E}\xi \| _2^2$ is independent of $\theta$. Thus

$$
\theta^ {*} = \arg \min _ {\theta} \int_ {0} ^ {1} \lambda_ {t} \mathbb {E} _ {X _ {t} \sim p _ {t} (\cdot)} \| s _ {\theta} (X _ {t}, t) - \mathbb {E} _ {X _ {0} \sim p _ {0 | t} (\cdot | X _ {t})} \left[ \nabla \log p _ {t | 0} (X _ {t} | X _ {0}) \right] \| _ {2} ^ {2} d t.
\tag {32} +$$ + +Therefore, the optimal score estimation network $s_{\theta^*}$ can be expressed as + +$$ +s _ {\theta^ {*}} (x, t) = \mathbb {E} _ {p _ {0 | t} (\cdot | x)} \left[ \nabla \log p _ {t | 0} (x | X _ {0}) \right] \tag {33} +$$ + +for all $t\in [0,1]$ and $x\in \operatorname {supp}\{p_t\} = \mathbb{R}^n$ . + +As proven in Appendix A, $\operatorname{Law}(X_t|X_0)$ is Gaussian with mean vector $\gamma_{0,t}X_0$ and covariance matrix $(1 - \gamma_{0,t}^2)\mathbf{I}$ , so finally we obtain + +$$ +s _ {\theta^ {*}} (x, t) = \mathbb {E} _ {p _ {0 | t} (\cdot | x)} \left[ - \frac {1}{1 - \gamma_ {0 , t} ^ {2}} \left(x - \gamma_ {0, t} X _ {0}\right) \right] = - \frac {1}{1 - \gamma_ {0 , t} ^ {2}} \left(x - \gamma_ {0, t} \mathbb {E} _ {p _ {0 | t} (\cdot | x)} X _ {0}\right). \tag {34} +$$ + +Now let us prove Theorem 1. + +Proof of Theorem 1. The sampling scheme (11) consists of adding Gaussian noise to a linear combination of $\hat{X}_t$ and $s_{\theta^*}(\hat{X}_t, t)$ .
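As a quick numerical sanity check of Lemma 1 (our own illustration, not part of the original proof), formula (28) can be compared against a finite-difference gradient of $\log p_t$ for a one-dimensional two-point data distribution, where $\mathbb{E}_{p_{0|t}(\cdot |x)}X_0$ has a closed form; all concrete values below are arbitrary:

```python
import numpy as np

# Sanity check of Lemma 1, eq. (28), in one dimension for a two-point data
# distribution X0 in {a, b} with equal probability. The score of the Gaussian
# mixture marginal p_t must equal -(x - gamma * E[X0 | Xt = x]) / (1 - gamma^2).
a, b, gamma = 1.0, -2.0, 0.6
v = 1.0 - gamma**2                        # Var(Xt | X0), cf. Appendix A

def log_p_t(x):
    # log of the two-component Gaussian mixture marginal p_t(x)
    wa = np.exp(-(x - gamma * a) ** 2 / (2 * v))
    wb = np.exp(-(x - gamma * b) ** 2 / (2 * v))
    return np.log(0.5 * (wa + wb) / np.sqrt(2 * np.pi * v))

def score_lemma1(x):
    # right-hand side of (28); E[X0 | Xt = x] is a weighted average of a and b
    wa = np.exp(-(x - gamma * a) ** 2 / (2 * v))
    wb = np.exp(-(x - gamma * b) ** 2 / (2 * v))
    e_x0 = (wa * a + wb * b) / (wa + wb)
    return -(x - gamma * e_x0) / v

for x in np.linspace(-3.0, 3.0, 13):
    eps = 1e-5
    fd = (log_p_t(x + eps) - log_p_t(x - eps)) / (2 * eps)  # finite-difference score
    assert abs(fd - score_lemma1(x)) < 1e-6
```

The same check extends to any finite mixture, since the posterior mean is a softmax-weighted average of the support points.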
Combining (11) with Lemma 1, we get + +$$ +\begin{array}{l} \hat {X} _ {t - h} = \hat {\sigma} _ {t, h} \xi_ {t} + \hat {X} _ {t} + \beta_ {t} h \left(\left(\frac {1}{2} + \hat {\omega} _ {t, h}\right) \hat {X} _ {t} + (1 + \hat {\kappa} _ {t, h}) s _ {\theta^ {*}} (\hat {X} _ {t}, t)\right) = \hat {\sigma} _ {t, h} \xi_ {t} \\ + \left(1 + \beta_ {t} h \left(\frac {1}{2} + \hat {\omega} _ {t, h}\right)\right) \hat {X} _ {t} + \beta_ {t} h (1 + \hat {\kappa} _ {t, h}) \left(- \frac {1}{1 - \gamma_ {0 , t} ^ {2}} \left(\hat {X} _ {t} - \gamma_ {0, t} \mathbb {E} _ {p _ {0 | t} (\cdot | \hat {X} _ {t})} X _ {0}\right)\right) \tag {35} \\ = \hat {\sigma} _ {t, h} \xi_ {t} + \left(1 + \beta_ {t} h \left(\frac {1}{2} + \hat {\omega} _ {t, h} - \frac {1 + \hat {\kappa} _ {t , h}}{1 - \gamma_ {0 , t} ^ {2}}\right)\right) \hat {X} _ {t} + \frac {\gamma_ {0 , t} \beta_ {t} h (1 + \hat {\kappa} _ {t , h})}{1 - \gamma_ {0 , t} ^ {2}} \mathbb {E} _ {p _ {0 | t} (\cdot | \hat {X} _ {t})} X _ {0}, \\ \end{array} +$$ + +where $\xi_{t}$ are i.i.d. random variables from the standard normal distribution $\mathcal{N}(0,\mathrm{I})$ for $t = 1,1 - h,\dots,h$ .
Thus, the distribution $\hat{X}_{t - h}|\hat{X}_t$ is also Gaussian: + +$$ +\operatorname {L a w} \left(\hat {X} _ {t - h} | \hat {X} _ {t}\right) = \mathcal {N} \left(\hat {\mu} _ {t, h} \left(\hat {\kappa} _ {t, h}, \hat {\omega} _ {t, h}\right) \hat {X} _ {t} + \hat {\nu} _ {t, h} \left(\hat {\kappa} _ {t, h}\right) \mathbb {E} _ {p _ {0 | t} (\cdot | \hat {X} _ {t})} X _ {0}, \hat {\sigma} _ {t, h} ^ {2} \mathrm {I}\right), \tag {36} +$$ + +$$ +\hat {\mu} _ {t, h} \left(\hat {\kappa} _ {t, h}, \hat {\omega} _ {t, h}\right) = 1 + \beta_ {t} h \left(\frac {1}{2} + \hat {\omega} _ {t, h} - \frac {1 + \hat {\kappa} _ {t , h}}{1 - \gamma_ {0 , t} ^ {2}}\right), \tag {37} +$$ + +$$ +\hat {\nu} _ {t, h} \left(\hat {\kappa} _ {t, h}\right) = \frac {\gamma_ {0 , t} \beta_ {t} h \left(1 + \hat {\kappa} _ {t , h}\right)}{1 - \gamma_ {0 , t} ^ {2}}, \tag {38} +$$ + +which leads to the following formula for the transition densities of the reverse diffusion: + +$$ +\hat {p} _ {t - h \mid t} \left(x _ {t - h} \mid x _ {t}\right) = \frac {1}{\sqrt {2 \pi} \hat {\sigma} _ {t , h} ^ {n}} \exp \left(- \frac {\| x _ {t - h} - \hat {\mu} _ {t , h} x _ {t} - \hat {\nu} _ {t , h} \mathbb {E} _ {p _ {0 \mid t} (\cdot | x _ {t})} X _ {0} \| _ {2} ^ {2}}{2 \hat {\sigma} _ {t , h} ^ {2}}\right). \tag {39} +$$ + +Moreover, comparing $\hat{\mu}_{t,h}$ and $\hat{\nu}_{t,h}$ with $\mu_{t - h,t}$ and $\nu_{t - h,t}$ defined in (9) we deduce that + +$$ +\hat {\nu} _ {t, h} = \nu_ {t - h, t} \Leftrightarrow \frac {\gamma_ {0 , t} \beta_ {t} h \left(1 + \hat {\kappa} _ {t , h}\right)}{1 - \gamma_ {0 , t} ^ {2}} = \nu_ {t - h, t} \Leftrightarrow \hat {\kappa} _ {t, h} = \kappa_ {t, h} ^ {*}. 
\tag {40} +$$ + +If we also want $\hat{\mu}_{t,h} = \mu_{t - h,t}$ to be satisfied, then we should have + +$$ +1 + \beta_ {t} h \left(\frac {1}{2} + \hat {\omega} _ {t, h} - \frac {1 + \kappa_ {t , h} ^ {*}}{1 - \gamma_ {0 , t} ^ {2}}\right) = \mu_ {t - h, t} \Leftrightarrow \left(\frac {\mu_ {t - h , t} - 1}{\beta_ {t} h} - \omega_ {t, h} ^ {*} + \hat {\omega} _ {t, h}\right) \beta_ {t} h + 1 = \mu_ {t - h, t}, \tag {41} +$$ + +i.e. $\hat{\nu}_{t,h} = \nu_{t - h,t}$ and $\hat{\mu}_{t,h} = \mu_{t - h,t}$ iff $\hat{\kappa}_{t,h} = \kappa_{t,h}^{*}$ and $\hat{\omega}_{t,h} = \omega_{t,h}^{*}$ for the parameters $\kappa_{t,h}^{*}$ and $\omega_{t,h}^{*}$ defined in (10). + +As for the corresponding densities of the forward process $X$ , they are Gaussian when conditioned on the initial data point $X_0$ : + +$$ +\operatorname {L a w} \left(X _ {t - h} | X _ {t}, X _ {0}\right) = \mathcal {N} \left(\mu_ {t - h, t} X _ {t} + \nu_ {t - h, t} X _ {0}, \sigma_ {t - h, t} ^ {2} \mathrm {I}\right), \tag {42} +$$ + +where coefficients $\mu_{t - h,t},\nu_{t - h,t}$ and $\sigma_{t - h,t}$ are defined in (9). This formula for Law $(X_{t - h}|X_t,X_0)$ follows from the general fact about Gaussian distributions appearing in many recent works on diffusion probabilistic modeling (Kingma et al., 2021): if $Z_{t}|Z_{s}\sim \mathcal{N}(\alpha_{t|s}Z_{s},\sigma_{t|s}^{2}\mathrm{I})$ and $Z_{t}|Z_{0}\sim \mathcal{N}(\alpha_{t|0}Z_{0},\sigma_{t|0}^{2}\mathrm{I})$ for $0 < s < t$ , then + +$$ +\operatorname {L a w} \left(Z _ {s} \mid Z _ {t}, Z _ {0}\right) = \mathcal {N} \left(\frac {\sigma_ {s \mid 0} ^ {2}}{\sigma_ {t \mid 0} ^ {2}} \alpha_ {t \mid s} Z _ {t} + \frac {\sigma_ {t \mid s} ^ {2}}{\sigma_ {t \mid 0} ^ {2}} \alpha_ {s \mid 0} Z _ {0}, \frac {\sigma_ {s \mid 0} ^ {2} \sigma_ {t \mid s} ^ {2}}{\sigma_ {t \mid 0} ^ {2}} I\right). \tag {43} +$$ + +This fact is a result of applying Bayes formula to normal distributions. 
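This identity can be sanity-checked numerically in the scalar case by conditioning the joint Gaussian of $(Z_s, Z_t)$ directly (a sketch with arbitrary coefficient values, not taken from the paper):

```python
import numpy as np

# Scalar sanity check of the Gaussian conditioning identity (43).
# Coefficient names follow the text; concrete values are arbitrary.
a_s0, s_s0 = 0.8, 0.5                # Z_s | Z_0 ~ N(a_s0 * z0, s_s0**2)
a_ts, s_ts = 0.7, 0.3                # Z_t | Z_s ~ N(a_ts * z_s, s_ts**2)
a_t0 = a_ts * a_s0                   # composition of the two linear-Gaussian steps
v_t0 = a_ts**2 * s_s0**2 + s_ts**2   # Var(Z_t | Z_0)

z0, zt = 1.3, -0.4                   # arbitrary conditioning values

# Mean and variance of Z_s | Z_t, Z_0 as stated in (43)
mean_43 = (s_s0**2 / v_t0) * a_ts * zt + (s_ts**2 / v_t0) * a_s0 * z0
var_43 = s_s0**2 * s_ts**2 / v_t0

# The same posterior via direct joint-Gaussian conditioning of (Z_s, Z_t) | Z_0,
# using Cov(Z_s, Z_t | Z_0) = a_ts * s_s0**2
cov_st = a_ts * s_s0**2
mean_direct = a_s0 * z0 + cov_st / v_t0 * (zt - a_t0 * z0)
var_direct = s_s0**2 - cov_st**2 / v_t0

assert np.isclose(mean_43, mean_direct)
assert np.isclose(var_43, var_direct)
```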
In our case $\alpha_{t|s} = \gamma_{s,t}$ and $\sigma_{t|s}^2 = 1 - \gamma_{s,t}^2$ . + +To get an expression for the densities $p_{t - h|t}(x_{t - h}|x_t)$ similar to (39), we need to integrate out the dependency on the data $X_0$ from the Gaussian distribution $\operatorname{Law}(X_{t - h}|X_t,X_0)$ : + +$$ +\begin{array}{l} p _ {t - h \mid t} \left(x _ {t - h} \mid x _ {t}\right) = \int p _ {t - h, 0 \mid t} \left(x _ {t - h}, x _ {0} \mid x _ {t}\right) d x _ {0} = \int p _ {t - h \mid t, 0} \left(x _ {t - h} \mid x _ {t}, x _ {0}\right) p _ {0 \mid t} \left(x _ {0} \mid x _ {t}\right) d x _ {0} \tag {44} \\ = \mathbb {E} _ {X _ {0} \sim p _ {0 | t} (\cdot | x _ {t})} \left[ p _ {t - h | t, 0} \left(x _ {t - h} \mid x _ {t}, X _ {0}\right) \right], \\ \end{array} +$$ + +which implies the following formula: + +$$ +p _ {t - h \mid t} (x _ {t - h} | x _ {t}) = \frac {1}{\sqrt {2 \pi} \sigma_ {t - h , t} ^ {n}} \mathbb {E} _ {p _ {0 \mid t} (\cdot | x _ {t})} \left[ \exp \left(- \frac {\| x _ {t - h} - \mu_ {t - h , t} x _ {t} - \nu_ {t - h , t} X _ {0} \| _ {2} ^ {2}}{2 \sigma_ {t - h , t} ^ {2}}\right) \right]. \tag {45} +$$ + +Note that in contrast with the transition densities (39) of the reverse process $\hat{X}$ , the corresponding densities (45) of the forward process $X$ are not normal in general. + +Our goal is to find parameters $\hat{\kappa}, \hat{\omega}$ and $\hat{\sigma}$ that maximize the log-likelihood of sample paths of $X$ under the probability measure with transition densities $\hat{p}$ . Put $t_k = kh$ for $k = 0,1,\dots,N$ and write down this log-likelihood: + +$$ +\begin{array}{l} \int p \left(x _ {1}, x _ {1 - h}, \dots, x _ {0}\right) \left(\sum_ {k = 0} ^ {N - 1} \log \hat {p} _ {t _ {k} | t _ {k + 1}} \left(x _ {t _ {k}} \mid x _ {t _ {k + 1}}\right) + \log \hat {p} _ {1} \left(x _ {1}\right)\right) d x _ {1} d x _ {1 - h} \dots d x _ {0} \tag {46} \\ = \sum_ {k = 0} ^ {N - 1} \int p \left(x _ {t _ {k}}, x _ {t _ {k + 1}}\right) \log \hat {p} _ {t _ {k} | t _ {k + 1}} \left(x _ {t _ {k}} \mid x _ {t _ {k + 1}}\right) d x _ {t _ {k + 1}} d x _ {t _ {k}} + \int p \left(x _ {1}\right) \log \hat {p} _ {1} \left(x _ {1}\right) d x _ {1}. \\ \end{array} +$$ + +The last term does not depend on $\hat{\kappa}, \hat{\omega}$ and $\hat{\sigma}$ , so we can ignore it. Let $R_{k}$ be the $k$ -th term in the sum above. Since we are free to have different coefficients $\hat{\kappa}_{t,h}, \hat{\omega}_{t,h}$ and $\hat{\sigma}_{t,h}$ for different steps, we can maximize each $R_{k}$ separately. The terms $R_{k}$ can be expressed as + +$$ +\begin{array}{l} R _ {k} = \int p (x _ {t _ {k}}, x _ {t _ {k + 1}}) \log \hat {p} _ {t _ {k} | t _ {k + 1}} (x _ {t _ {k}} | x _ {t _ {k + 1}}) d x _ {t _ {k + 1}} d x _ {t _ {k}} \\ = \int p \left(x _ {t _ {k + 1}}\right) p _ {t _ {k} \mid t _ {k + 1}} \left(x _ {t _ {k}} \mid x _ {t _ {k + 1}}\right) \log \hat {p} _ {t _ {k} \mid t _ {k + 1}} \left(x _ {t _ {k}} \mid x _ {t _ {k + 1}}\right) d x _ {t _ {k + 1}} d x _ {t _ {k}} \tag {47} \\ = \mathbb {E} _ {X _ {t _ {k + 1}}} \left[ \int p _ {t _ {k} | t _ {k + 1}} (x _ {t _ {k}} | X _ {t _ {k + 1}}) \log \hat {p} _ {t _ {k} | t _ {k + 1}} (x _ {t _ {k}} | X _ {t _ {k + 1}}) d x _ {t _ {k}} \right]. \\ \end{array} +$$ + +From now on we will omit the subscripts of $\mu, \nu, \sigma, \hat{\mu}, \hat{\nu}, \hat{\sigma}, \hat{\kappa}, \hat{\omega}, \kappa^{*}$ and $\omega^{*}$ for brevity. Denote + +$$ +Q \left(x _ {t _ {k}}, X _ {t _ {k + 1}}, X _ {0}\right) = \frac {1}{\sqrt {2 \pi} \sigma^ {n}} \exp \left(- \frac {\left\| x _ {t _ {k}} - \mu X _ {t _ {k + 1}} - \nu X _ {0} \right\| _ {2} ^ {2}}{2 \sigma^ {2}}\right) \log \hat {p} _ {t _ {k} | t _ {k + 1}} \left(x _ {t _ {k}} \mid X _ {t _ {k + 1}}\right).
\tag {48} +$$ + +Using the formula (44) for the densities of $X$ together with the explicit expression for the Gaussian density $p_{t_k|t_{k+1},0}(x_{t_k}|X_{t_{k+1}},X_0)$ and applying Fubini's theorem to change the order of integration, we rewrite $R_k$ as + +$$ +\begin{array}{l} R _ {k} = \mathbb {E} _ {X _ {t _ {k + 1}}} \left[ \int p _ {t _ {k} | t _ {k + 1}} (x _ {t _ {k}} | X _ {t _ {k + 1}}) \log \hat {p} _ {t _ {k} | t _ {k + 1}} (x _ {t _ {k}} | X _ {t _ {k + 1}}) d x _ {t _ {k}} \right] \\ = \mathbb {E} _ {X _ {t _ {k + 1}}} \left[ \int \mathbb {E} _ {X _ {0} \sim p _ {0 | t _ {k + 1}} (\cdot | X _ {t _ {k + 1}})} \left[ p _ {t _ {k} | t _ {k + 1}, 0} (x _ {t _ {k}} | X _ {t _ {k + 1}}, X _ {0}) \log \hat {p} _ {t _ {k} | t _ {k + 1}} (x _ {t _ {k}} | X _ {t _ {k + 1}}) \right] d x _ {t _ {k}} \right] \\ = \mathbb {E} _ {X _ {t _ {k + 1}}} \left[ \int \mathbb {E} _ {X _ {0} \sim p _ {0 | t _ {k + 1}} (\cdot | X _ {t _ {k + 1}})} [ Q (x _ {t _ {k}}, X _ {t _ {k + 1}}, X _ {0}) ] d x _ {t _ {k}} \right] \\ = \mathbb {E} _ {X _ {t _ {k + 1}}} \mathbb {E} _ {X _ {0} \sim p _ {0 | t _ {k + 1}} (\cdot | X _ {t _ {k + 1}})} \left[ \int Q \left(x _ {t _ {k}}, X _ {t _ {k + 1}}, X _ {0}\right) d x _ {t _ {k}} \right]. \tag {49} \\ \end{array} +$$ + +The formula (48) implies that the integral of $Q(x_{t_k}, X_{t_{k+1}}, X_0)$ with respect to $x_{t_k}$ can be seen as expectation of $\log \hat{p}_{t_k|t_{k+1}}(\xi | X_{t_{k+1}})$ with respect to normal random variable $\xi$ with mean $\mu X_{t_{k+1}} + \nu X_0$ and covariance matrix $\sigma^2\mathrm{I}$ . 
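The Gaussian second-moment identity that this expectation reduces to, $\mathbb{E}\|\xi - \hat a\|_2^2 = n\sigma^2 + \|\mathbb{E}\xi - \hat a\|_2^2$ , is easy to confirm by Monte Carlo (an illustrative sketch with arbitrary values, not from the paper):

```python
import numpy as np

# Monte Carlo check of E||xi - a_hat||^2 = n * sigma^2 + ||E[xi] - a_hat||^2
# for xi ~ N(mean, sigma^2 I); all concrete values are arbitrary.
rng = np.random.default_rng(0)
n, sigma = 5, 0.7
mean = rng.normal(size=n)      # E[xi]
a_hat = rng.normal(size=n)     # arbitrary fixed vector \hat{a}

xi = mean + sigma * rng.normal(size=(200_000, n))
lhs = np.mean(np.sum((xi - a_hat) ** 2, axis=1))   # Monte Carlo estimate
rhs = n * sigma**2 + np.sum((mean - a_hat) ** 2)   # closed form

assert abs(lhs - rhs) / rhs < 0.01
```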
Plugging in the expression (39) into (48), we can calculate this integral: + +$$ +\begin{array}{l} \mathbb {E} _ {\xi} \left[ - \log \sqrt {2 \pi} - n \log \hat {\sigma} - \frac {\| \xi - \hat {\mu} X _ {t _ {k + 1}} - \hat {\nu} \mathbb {E} _ {X _ {0} ^ {\prime} \sim p _ {0 | t _ {k + 1}} (\cdot | X _ {t _ {k + 1}})} X _ {0} ^ {\prime} \| _ {2} ^ {2}}{2 \hat {\sigma} ^ {2}} \right] \tag {50} \\ = - \log \sqrt {2 \pi} - n \log \hat {\sigma} - \frac {\mathbb {E} _ {\xi} \| \xi - \hat {\mu} X _ {t _ {k + 1}} - \hat {\nu} \mathbb {E} _ {p _ {0 | t _ {k + 1}} (\cdot | X _ {t _ {k + 1}})} X _ {0} ^ {\prime} \| _ {2} ^ {2}}{2 \hat {\sigma} ^ {2}}. \\ \end{array} +$$ + +Thus, terms $R_{k}$ we want to maximize equal + +$$ +R _ {k} = - \log \sqrt {2 \pi} - n \log \hat {\sigma} - \mathbb {E} _ {X _ {t _ {k + 1}}} \mathbb {E} _ {X _ {0} \sim p _ {0 | t _ {k + 1}} (\cdot | X _ {t _ {k + 1}})} \frac {\mathbb {E} _ {\xi} \| \xi - \hat {\mu} X _ {t _ {k + 1}} - \hat {\nu} \mathbb {E} _ {p _ {0 | t _ {k + 1}} (\cdot | X _ {t _ {k + 1}})} X _ {0} ^ {\prime} \| _ {2} ^ {2}}{2 \hat {\sigma} ^ {2}} \tag {51} +$$ + +Maximizing $R_{k}$ with respect to $(\hat{\kappa},\hat{\omega},\hat{\sigma})$ is equivalent to minimizing $\mathbb{E}_{X_{t_{k + 1}}}S_k$ where $S_{k}$ is given by + +$$ +S _ {k} = n \log \hat {\sigma} + \frac {1}{2 \hat {\sigma} ^ {2}} \mathbb {E} _ {X _ {0} \sim p _ {0 | t _ {k + 1}} (\cdot | X _ {t _ {k + 1}})} \mathbb {E} _ {\xi} \| \xi - \hat {\mu} X _ {t _ {k + 1}} - \hat {\nu} \mathbb {E} _ {p _ {0 | t _ {k + 1}} (\cdot | X _ {t _ {k + 1}})} X _ {0} ^ {\prime} \| _ {2} ^ {2}, \tag {52} +$$ + +where the expectation with respect to $\xi \sim \mathcal{N}(\mu X_{t_{k + 1}} + \nu X_0,\sigma^2\mathrm{I})$ can be calculated using the fact that for every vector $\hat{a}$ we can express $\mathbb{E}_{\xi}\| \xi -\hat{a}\| _2^2$ as + +$$ +\mathbb {E} \| \xi - \mathbb {E} \xi + \mathbb {E} \xi - \hat {a} \| _ {2} ^ {2} = \mathbb {E} \| \xi - \mathbb {E} \xi \| _ {2} ^ {2} + 2 
\langle \mathbb {E} [ \xi - \mathbb {E} \xi ], \mathbb {E} \xi - \hat {a} \rangle + \mathbb {E} \| \mathbb {E} \xi - \hat {a} \| _ {2} ^ {2} = n \sigma^ {2} + \| \mathbb {E} \xi - \hat {a} \| _ {2} ^ {2}. \tag {53} +$$ + +So, the outer expectation with respect to $\mathrm{Law}\big(X_0|X_{t_{k + 1}}\big)$ in (52) can be simplified: + +$$ +\begin{array}{l} \mathbb {E} _ {X _ {0} \sim p _ {0 | t _ {k + 1}} (\cdot | X _ {t _ {k + 1}})} \left[ n \sigma^ {2} + \| (\mu - \hat {\mu}) X _ {t _ {k + 1}} + \nu X _ {0} - \hat {\nu} \mathbb {E} _ {X _ {0} ^ {\prime} \sim p _ {0 | t _ {k + 1}} (\cdot | X _ {t _ {k + 1}})} X _ {0} ^ {\prime} \| _ {2} ^ {2} \right] \\ = n \sigma^ {2} + \mathbb {E} _ {X _ {0}} \| (\mu - \hat {\mu}) X _ {t _ {k + 1}} + \nu X _ {0} - \hat {\nu} \mathbb {E} _ {X _ {0} ^ {\prime}} X _ {0} ^ {\prime} \| _ {2} ^ {2} = n \sigma^ {2} + (\mu - \hat {\mu}) ^ {2} \| X _ {t _ {k + 1}} \| _ {2} ^ {2} \\ + 2 \left\langle (\mu - \hat {\mu}) X _ {t _ {k + 1}}, (\nu - \hat {\nu}) \mathbb {E} _ {X _ {0}} X _ {0} \right\rangle + \mathbb {E} _ {X _ {0}} \| \nu X _ {0} - \hat {\nu} \mathbb {E} _ {X _ {0} ^ {\prime}} X _ {0} ^ {\prime} \| _ {2} ^ {2} = (\mu - \hat {\mu}) ^ {2} \| X _ {t _ {k + 1}} \| _ {2} ^ {2} \tag {54} \\ + 2 \langle (\mu - \hat {\mu}) X _ {t _ {k + 1}}, (\nu - \hat {\nu}) \mathbb {E} _ {X _ {0}} X _ {0} \rangle + \nu^ {2} \mathbb {E} _ {X _ {0}} \| X _ {0} \| _ {2} ^ {2} + \hat {\nu} ^ {2} \| \mathbb {E} _ {X _ {0}} X _ {0} \| _ {2} ^ {2} + n \sigma^ {2} \\ - 2 \nu \hat {\nu} \langle \mathbb {E} _ {X _ {0}} X _ {0}, \mathbb {E} _ {X _ {0} ^ {\prime}} X _ {0} ^ {\prime} \rangle = \| (\mu - \hat {\mu}) X _ {t _ {k + 1}} + (\nu - \hat {\nu}) \mathbb {E} _ {X _ {0}} X _ {0} \| _ {2} ^ {2} + \nu^ {2} \mathbb {E} _ {X _ {0}} \| X _ {0} \| _ {2} ^ {2} \\ - \nu^ {2} \| \mathbb {E} _ {X _ {0}} X _ {0} \| _ {2} ^ {2} + n \sigma^ {2}, \\ \end{array} +$$ + +where all the expectations in the formula above are taken with respect to the conditional data distribution
$\mathrm{Law}(X_0|X_{t_{k + 1}})$ . So, the resulting expression for the terms $S_{k}$ whose expectation with respect to $\mathrm{Law}(X_{t_{k + 1}})$ we want to minimize is + +$$ +\begin{array}{l} S _ {k} = n \log \hat {\sigma} + \frac {1}{2 \hat {\sigma} ^ {2}} \left(n \sigma^ {2} + \| (\mu - \hat {\mu}) X _ {t _ {k + 1}} + (\nu - \hat {\nu}) \mathbb {E} [ X _ {0} | X _ {t _ {k + 1}} ] \| _ {2} ^ {2} \right. \tag {55} \\ \left. + \nu^ {2} \left(\mathbb {E} \left[ \| X _ {0} \| _ {2} ^ {2} | X _ {t _ {k + 1}} \right] - \| \mathbb {E} [ X _ {0} | X _ {t _ {k + 1}} ] \| _ {2} ^ {2}\right)\right). \\ \end{array} +$$ + +Now it is clear that $\kappa_{t_{k + 1},h}^{*}$ and $\omega_{t_{k + 1},h}^{*}$ are optimal because $\hat{\mu}_{t_{k + 1},h}(\kappa_{t_{k + 1},h}^{*},\omega_{t_{k + 1},h}^{*}) = \mu_{t_k,t_{k + 1}}$ and $\hat{\nu}_{t_{k + 1},h}(\kappa_{t_{k + 1},h}^{*}) = \nu_{t_k,t_{k + 1}}$ . For this choice of parameters we have + +$$ +\mathbb {E} _ {X _ {t _ {k + 1}}} S _ {k} = n \log \hat {\sigma} + \frac {1}{2 \hat {\sigma} ^ {2}} \left(n \sigma^ {2} + \nu^ {2} \mathbb {E} _ {X _ {t _ {k + 1}}} \left[ \mathbb {E} \left[ \| X _ {0} \| _ {2} ^ {2} | X _ {t _ {k + 1}} \right] - \| \mathbb {E} [ X _ {0} | X _ {t _ {k + 1}} ] \| _ {2} ^ {2} \right]\right). \tag {56} +$$ + +Note that $\mathbb{E}\left[\| X_0\| _2^2 |X_{t_{k + 1}}\right] - \| \mathbb{E}[X_0|X_{t_{k + 1}}]\| _2^2 = \mathrm{Tr}\left(\mathrm{Var}\left(X_0|X_{t_{k + 1}}\right)\right)$ is the overall variance of $\operatorname {Law}\left(X_0|X_{t_{k + 1}}\right)$ along all $n$ dimensions. Differentiating $\mathbb{E}_{X_{t_{k + 1}}}S_k$ with respect to $\hat{\sigma}$ shows that the optimal $\sigma_{t_{k + 1},h}^{*}$ should satisfy + +$$ +\left. 
\frac {n}{\sigma_ {t _ {k + 1} , h} ^ {*}} - \frac {1}{\left(\sigma_ {t _ {k + 1} , h} ^ {*}\right) ^ {3}} \left(n \sigma_ {t _ {k}, t _ {k + 1}} ^ {2} + \nu_ {t _ {k}, t _ {k + 1}} ^ {2} \mathbb {E} _ {X _ {t _ {k + 1}}} \left[ \operatorname {T r} \left(\operatorname {V a r} \left(X _ {0} \mid X _ {t _ {k + 1}}\right)\right) \right]\right) = 0, \right. \tag {57} +$$ + +which is indeed satisfied by the parameters $\sigma_{t,h}^{*}$ defined in (10). Thus, statement (i) is proven. + +To prove that $\hat{X}$ is exact, we have to show that $\operatorname{Law}(\hat{X}_{t_k}) = \operatorname{Law}(X_{t_k})$ for every $k = 0,1,\dots,N$ . By the assumption that $\operatorname{Law}(\hat{X}_1) = \operatorname{Law}(X_1)$ it is sufficient to prove that $\hat{p}_{t_k|t_{k+1}}(x_{t_k}|x_{t_{k+1}}) \equiv p_{t_k|t_{k+1}}(x_{t_k}|x_{t_{k+1}})$ , since the exactness will then follow by mathematical induction. If $X_0$ is a constant random variable, $\operatorname{Law}(X_0|X_t) = \operatorname{Law}(X_0)$ also corresponds to the same constant, so $\operatorname{Var}(X_0|X_t) = 0$ , meaning that $\sigma_{t,h}^* = \sigma_{t-h,t}$ , and the formulae (39) and (45) imply the desired result. + +Let us now consider the second case, when $X_0 \sim \mathcal{N}(\bar{\mu}, \delta^2\mathrm{I})$ . Simple but lengthy computations prove another property of Gaussian distributions similar to (43): if $Z_0 \sim \mathcal{N}(\bar{\mu}, \delta^2\mathrm{I})$ and $Z_t|Z_0 \sim \mathcal{N}(a_tZ_0, b_t^2\mathrm{I})$ , then $Z_0|Z_t \sim \mathcal{N}\left(\frac{b_t^2}{b_t^2 + \delta^2a_t^2}\bar{\mu} + \frac{\delta^2a_t}{b_t^2 + \delta^2a_t^2}Z_t, \frac{\delta^2b_t^2}{b_t^2 + \delta^2a_t^2}\mathrm{I}\right)$ and $Z_t \sim \mathcal{N}(\bar{\mu}a_t, (b_t^2 + \delta^2a_t^2)\mathrm{I})$ .
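This Gaussian posterior property can likewise be confirmed numerically in the scalar case via direct joint-Gaussian conditioning (the concrete values are arbitrary, for illustration only):

```python
import numpy as np

# Numerical check of the stated posterior property, scalar case:
# Z0 ~ N(mu_bar, delta^2), Zt | Z0 ~ N(a * Z0, b^2).
mu_bar, delta, a, b = 0.4, 1.5, 0.7, 0.9
zt = -1.1                                  # arbitrary conditioning value
den = b**2 + delta**2 * a**2               # Var(Zt) = b^2 + delta^2 a^2

# Posterior mean and variance from the formula in the text
mean_f = b**2 / den * mu_bar + delta**2 * a / den * zt
var_f = delta**2 * b**2 / den

# The same posterior via joint-Gaussian conditioning, Cov(Z0, Zt) = a * delta^2
mean_d = mu_bar + a * delta**2 / den * (zt - a * mu_bar)
var_d = delta**2 - (a * delta**2) ** 2 / den

assert np.isclose(mean_f, mean_d)
assert np.isclose(var_f, var_d)
```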
In our case $a_t = \gamma_{0,t}$ and $b_t^2 = 1 - \gamma_{0,t}^2$ , therefore + +$$ +\operatorname {L a w} \left(X _ {0} | X _ {t}\right) = \mathcal {N} \left(\frac {1 - \gamma_ {0 , t} ^ {2}}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}} \bar {\mu} + \frac {\delta^ {2} \gamma_ {0 , t}}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}} X _ {t}, \frac {\delta^ {2} (1 - \gamma_ {0 , t} ^ {2})}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}} \mathrm {I}\right). \tag {58} +$$ + +So, $\operatorname{Var}(X_0|X_t)$ does not depend on $X_{t}$ and + +$$ +(\sigma_ {t, h} ^ {*}) ^ {2} = \sigma_ {t - h, t} ^ {2} + \frac {\nu_ {t - h , t} ^ {2}}{n} \mathbb {E} _ {X _ {t}} [ \operatorname {T r} (\operatorname {V a r} (X _ {0} | X _ {t})) ] = \sigma_ {t - h, t} ^ {2} + \nu_ {t - h, t} ^ {2} \frac {\delta^ {2} (1 - \gamma_ {0 , t} ^ {2})}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}}. \tag {59} +$$ + +Since $\operatorname{Law}(X_t|X_{t - h})$ , $\operatorname{Law}(X_{t - h})$ and $\operatorname{Law}(X_t)$ are Gaussian, Bayes formula implies that $\operatorname{Law}(X_{t - h}|X_t)$ is Gaussian as well with the following mean and covariance matrix: + +$$ +\mathbb {E} [ X _ {t - h} | X _ {t} ] = \frac {\gamma_ {0 , t - h} (1 - \gamma_ {t - h , t} ^ {2})}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}} \bar {\mu} + \frac {\gamma_ {t - h , t} (1 - \gamma_ {0 , t - h} ^ {2} + \delta^ {2} \gamma_ {0 , t - h} ^ {2})}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}} X _ {t}, \tag {60} +$$ + +$$ +\operatorname {V a r} \left(X _ {t - h} \mid X _ {t}\right) = \frac {\left(1 - \gamma_ {t - h , t} ^ {2}\right) \left(1 - \gamma_ {0 , t - h} ^ {2} + \delta^ {2} \gamma_ {0 , t - h} ^ {2}\right)}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}} \mathrm {I}. 
\tag {61} +$$ + +The distribution $\operatorname{Law}(\hat{X}_{t - h}|\hat{X}_t)$ is also Gaussian by the formula (39), so to conclude the proof we just need to show that $\mathbb{E}[\hat{X}_{t - h}|\hat{X}_t = x] = \mathbb{E}[X_{t - h}|X_t = x]$ and $\operatorname{Var}(\hat{X}_{t - h}|\hat{X}_t = x) = \operatorname{Var}(X_{t - h}|X_t = x)$ for every $x\in \mathbb{R}^n$ for the optimal parameters (10). Recall that for $\kappa_{t,h}^{*}$ and $\omega_{t,h}^{*}$ we have $\hat{\mu}_{t,h}(\kappa_{t,h}^{*},\omega_{t,h}^{*}) = \mu_{t - h,t}$ and $\hat{\nu}_{t,h}(\kappa_{t,h}^{*}) = \nu_{t - h,t}$ . Utilizing the formulae (9), (39), (58) and the fact that $\gamma_{0,t - h}\cdot \gamma_{t - h,t} = \gamma_{0,t}$ (following from the definition of $\gamma$ in (7)) we conclude that + +$$ +\begin{array}{l} \mathbb {E} [ \hat {X} _ {t - h} | \hat {X} _ {t} = x ] = \hat {\mu} _ {t, h} (\kappa_ {t, h} ^ {*}, \omega_ {t, h} ^ {*}) x + \hat {\nu} _ {t, h} (\kappa_ {t, h} ^ {*}) \mathbb {E} _ {p _ {0 | t} (\cdot | x)} X _ {0} \\ = \gamma_ {t - h, t} \frac {1 - \gamma_ {0 , t - h} ^ {2}}{1 - \gamma_ {0 , t} ^ {2}} x + \gamma_ {0, t - h} \frac {1 - \gamma_ {t - h , t} ^ {2}}{1 - \gamma_ {0 , t} ^ {2}} \left[ \frac {1 - \gamma_ {0 , t} ^ {2}}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}} \bar {\mu} + \frac {\delta^ {2} \gamma_ {0 , t}}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}} x \right] \\ = \gamma_ {0, t - h} \frac {1 - \gamma_ {t - h , t} ^ {2}}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}} \bar {\mu} + \gamma_ {t - h, t} \frac {(1 - \gamma_ {0 , t - h} ^ {2}) (1 - \gamma_ {0 , t} ^ {2}) + \delta^ {2} \gamma_ {0 , t} ^ {2} (1 - \gamma_ {0 , t - h} ^ {2})}{(1 - \gamma_ {0 , t} ^ {2}) (1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2})} x \\ + \gamma_ {t - h, t} \frac {\delta^ {2} \gamma_ {0 , t - h} ^ {2} \left(1 - \gamma_ {t - h , t} ^ {2}\right)}{\left(1 - \gamma_ {0 , t} ^ {2}\right) \left(1 - \gamma_ {0 , t} ^ {2} + 
\delta^ {2} \gamma_ {0 , t} ^ {2}\right)} x = \gamma_ {0, t - h} \frac {1 - \gamma_ {t - h , t} ^ {2}}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}} \bar {\mu} \tag {62} \\ + \gamma_ {t - h, t} \frac {(1 - \gamma_ {0 , t - h} ^ {2}) (1 - \gamma_ {0 , t} ^ {2}) + \delta^ {2} \gamma_ {0 , t - h} ^ {2} (1 - \gamma_ {0 , t} ^ {2})}{(1 - \gamma_ {0 , t} ^ {2}) (1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2})} x = \gamma_ {0, t - h} \frac {1 - \gamma_ {t - h , t} ^ {2}}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}} \bar {\mu} \\ + \gamma_ {t - h, t} \frac {1 - \gamma_ {0 , t - h} ^ {2} + \delta^ {2} \gamma_ {0 , t - h} ^ {2}}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}} x = \mathbb {E} [ X _ {t - h} | X _ {t} = x ], \\ \end{array} +$$ + +$$ +\begin{array}{l} \mathrm {V a r} (\hat {X} _ {t - h} | \hat {X} _ {t} = x) = (\sigma_ {t, h} ^ {*}) ^ {2} \mathrm {I} = \left(\sigma_ {t - h, t} ^ {2} + \nu_ {t - h, t} ^ {2} \frac {\delta^ {2} (1 - \gamma_ {0 , t} ^ {2})}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}}\right) \mathrm {I} \\ = \left(\frac {(1 - \gamma_ {0 , t - h} ^ {2}) (1 - \gamma_ {t - h , t} ^ {2})}{1 - \gamma_ {0 , t} ^ {2}} + \gamma_ {0, t - h} ^ {2} \frac {\delta^ {2} (1 - \gamma_ {t - h , t} ^ {2}) ^ {2}}{(1 - \gamma_ {0 , t} ^ {2}) (1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2})}\right) I \\ = \frac {1 - \gamma_ {t - h , t} ^ {2}}{\left(1 - \gamma_ {0 , t} ^ {2}\right) \left(1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}\right)} \left(\left(1 - \gamma_ {0, t - h} ^ {2}\right) \left(1 - \gamma_ {0, t} ^ {2}\right) + \delta^ {2} \gamma_ {0, t} ^ {2} \left(1 - \gamma_ {0, t - h} ^ {2}\right)\right) I \tag {63} \\ + \frac {1 - \gamma_ {t - h , t} ^ {2}}{(1 - \gamma_ {0 , t} ^ {2}) (1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2})} \left(\delta^ {2} \gamma_ {0, t - h} ^ {2} (1 - \gamma_ {t - h, t} ^ {2})\right) \mathrm {I} \\ = \frac {1 - 
\gamma_ {t - h , t} ^ {2}}{(1 - \gamma_ {0 , t} ^ {2}) (1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2})} \left((1 - \gamma_ {0, t - h} ^ {2}) (1 - \gamma_ {0, t} ^ {2}) + \delta^ {2} \gamma_ {0, t - h} ^ {2} (1 - \gamma_ {0, t} ^ {2})\right) I \\ = \frac {(1 - \gamma_ {t - h , t} ^ {2}) (1 - \gamma_ {0 , t - h} ^ {2} + \delta^ {2} \gamma_ {0 , t - h} ^ {2})}{1 - \gamma_ {0 , t} ^ {2} + \delta^ {2} \gamma_ {0 , t} ^ {2}} \mathrm {I} = \mathrm {V a r} \left(X _ {t - h} | X _ {t} = x\right). \\ \end{array} +$$ + +# D REVERSE MR-VP SDE SOLVER + +The MR-VP DPM is characterized by the following forward and reverse diffusions: + +$$ +d X _ {t} = \frac {1}{2} \beta_ {t} (\bar {X} - X _ {t}) d t + \sqrt {\beta_ {t}} d \overrightarrow {W _ {t}}, \tag {64} +$$ + +$$ +d \hat {X} _ {t} = \left(\frac {1}{2} \beta_ {t} (\bar {X} - \hat {X} _ {t}) - \beta_ {t} s _ {\theta} (\hat {X} _ {t}, \bar {X}, t)\right) d t + \sqrt {\beta_ {t}} d \overleftarrow {W} _ {t}. \tag {65} +$$ + +Using the same method as in Appendix A, we can show that for $s < t$ + +$$ +\operatorname {L a w} \left(X _ {t} | X _ {s}\right) = \mathcal {N} \left(\gamma_ {s, t} X _ {s} + \left(1 - \gamma_ {s, t}\right) \bar {X}, \left(1 - \gamma_ {s, t} ^ {2}\right) \mathrm {I}\right), \quad \gamma_ {s, t} = e ^ {- \frac {1}{2} \int_ {s} ^ {t} \beta_ {u} d u}. \tag {66} +$$ + +With the following notation: + +$$ +\mu_ {s, t} = \gamma_ {s, t} \frac {1 - \gamma_ {0 , s} ^ {2}}{1 - \gamma_ {0 , t} ^ {2}}, \quad \nu_ {s, t} = \gamma_ {0, s} \frac {1 - \gamma_ {s , t} ^ {2}}{1 - \gamma_ {0 , t} ^ {2}}, \quad \sigma_ {s, t} ^ {2} = \frac {\left(1 - \gamma_ {0 , s} ^ {2}\right) \left(1 - \gamma_ {s , t} ^ {2}\right)}{1 - \gamma_ {0 , t} ^ {2}} \tag {67} +$$ + +we can write down the parameters of the Gaussian distribution $X_{s}|X_{t},X_{0}$ :
+ +$$ +\mathbb {E} \left[ X _ {s} \mid X _ {t}, X _ {0} \right] = \bar {X} + \mu_ {s, t} \left(X _ {t} - \bar {X}\right) + \nu_ {s, t} \left(X _ {0} - \bar {X}\right), \operatorname {V a r} \left(X _ {s} \mid X _ {t}, X _ {0}\right) = \sigma_ {s, t} ^ {2} \mathrm {I}. \tag {68} +$$ + +Lemma 1 for MR-VP DPMs takes the following form: + +$$ +\begin{array}{l} s _ {\theta^ {*}} (x, \bar {X}, t) = - \frac {1}{1 - \gamma_ {0 , t} ^ {2}} \left(x - (1 - \gamma_ {0, t}) \bar {X} - \gamma_ {0, t} \mathbb {E} _ {p _ {0 | t} (\cdot | x)} X _ {0}\right) \\ = - \frac {1}{1 - \gamma_ {0 , t} ^ {2}} \left((x - \bar {X}) - \gamma_ {0, t} \left(\mathbb {E} _ {p _ {0 | t} (\cdot | x)} X _ {0} - \bar {X}\right)\right). \tag {69} \\ \end{array} +$$ + +The class of reverse SDE solvers we consider is + +$$ +\hat {X} _ {t - h} = \hat {X} _ {t} + \beta_ {t} h \left(\left(\frac {1}{2} + \hat {\omega} _ {t, h}\right) (\hat {X} _ {t} - \bar {X}) + (1 + \hat {\kappa} _ {t, h}) s _ {\theta} (\hat {X} _ {t}, \bar {X}, t)\right) + \hat {\sigma} _ {t, h} \xi_ {t}, \tag {70} +$$ + +where $t = 1,1 - h,\dots,h$ and $\xi_{t}$ are i.i.d. samples from $\mathcal{N}(0,\mathrm{I})$ . Repeating the argument of Theorem 1 leads to the following optimal (in terms of likelihood of the forward diffusion sample paths) parameters: + +$$ +\begin{array}{l} \kappa_ {t, h} ^ {*} = \frac {\nu_ {t - h , t} \left(1 - \gamma_ {0 , t} ^ {2}\right)}{\gamma_ {0 , t} \beta_ {t} h} - 1, \quad \omega_ {t, h} ^ {*} = \frac {\mu_ {t - h , t} - 1}{\beta_ {t} h} + \frac {1 + \kappa_ {t , h} ^ {*}}{1 - \gamma_ {0 , t} ^ {2}} - \frac {1}{2}, \tag {71} \\ (\sigma_ {t, h} ^ {*}) ^ {2} = \sigma_ {t - h, t} ^ {2} + \frac {1}{n} \nu_ {t - h, t} ^ {2} \mathbb {E} _ {X _ {t}} \left[ \mathrm {T r} \left(\mathrm {V a r} \left(X _ {0} | X _ {t}\right)\right) \right], \\ \end{array} +$$ + +which are the same as the optimal parameters (10) for the VP DPM.
This is not surprising, since the MR-VP DPM and the VP DPM differ only by a constant shift. + +# E REVERSE SUB-VP SDE SOLVER + +The sub-VP DPM is characterized by the following forward and reverse diffusions: + +$$ +d X _ {t} = - \frac {1}{2} \beta_ {t} X _ {t} d t + \sqrt {\beta_ {t} \left(1 - e ^ {- 2 \int_ {0} ^ {t} \beta_ {u} d u}\right)} d \overrightarrow {W _ {t}} , \tag {72} +$$ + +$$ +d \hat {X} _ {t} = \left(- \frac {1}{2} \beta_ {t} \hat {X} _ {t} - \beta_ {t} \left(1 - e ^ {- 2 \int_ {0} ^ {t} \beta_ {u} d u}\right) s _ {\theta} (\hat {X} _ {t}, t)\right) d t + \sqrt {\beta_ {t} \left(1 - e ^ {- 2 \int_ {0} ^ {t} \beta_ {u} d u}\right)} d \overleftarrow {W} _ {t}. \tag {73} +$$ + +Using the same method as in Appendix A, we can show that for $s < t$ + +$$ +\operatorname {L a w} \left(X _ {t} \mid X _ {s}\right) = \mathcal {N} \left(\gamma_ {s, t} X _ {s}, \left(1 + \gamma_ {0, t} ^ {4} - \gamma_ {s, t} ^ {2} \left(1 + \gamma_ {0, s} ^ {4}\right)\right) I\right), \quad \gamma_ {s, t} = e ^ {- \frac {1}{2} \int_ {s} ^ {t} \beta_ {u} d u}. \tag {74} +$$ + +Note that for $s = 0$ this expression simplifies to + +$$ +\operatorname {L a w} \left(X _ {t} \mid X _ {0}\right) = \mathcal {N} \left(\gamma_ {0, t} X _ {0}, \left(1 - \gamma_ {0, t} ^ {2}\right) ^ {2} \mathrm {I}\right). \tag {75} +$$ + +With the following notation: + +$$ +\mu_ {s, t} = \gamma_ {s, t} \left(\frac {1 - \gamma_ {0 , s} ^ {2}}{1 - \gamma_ {0 , t} ^ {2}}\right) ^ {2}, \quad \nu_ {s, t} = \gamma_ {0, s} \frac {1 + \gamma_ {0 , t} ^ {4} - \gamma_ {s , t} ^ {2} \left(1 + \gamma_ {0 , s} ^ {4}\right)}{\left(1 - \gamma_ {0 , t} ^ {2}\right) ^ {2}}, \tag {76} +$$ + +$$ +\sigma_ {s, t} ^ {2} = \frac {(1 - \gamma_ {0 , s} ^ {2}) ^ {2} (1 + \gamma_ {0 , t} ^ {4} - \gamma_ {s , t} ^ {2} (1 + \gamma_ {0 , s} ^ {4}))}{(1 - \gamma_ {0 , t} ^ {2}) ^ {2}} +$$ + +we can write down the parameters of the Gaussian distribution $X_{s}|X_{t},X_{0}$ :
+ +$$ +\mathbb {E} \left[ X _ {s} \mid X _ {t}, X _ {0} \right] = \mu_ {s, t} X _ {t} + \nu_ {s, t} X _ {0}, \operatorname {V a r} \left(X _ {s} \mid X _ {t}, X _ {0}\right) = \sigma_ {s, t} ^ {2} \mathrm {I}. \tag {77} +$$ + +Lemma 1 for sub-VP DPMs takes the following form: + +$$ +s _ {\theta^ {*}} (x, t) = - \frac {1}{(1 - \gamma_ {0 , t} ^ {2}) ^ {2}} \left(x - \gamma_ {0, t} \mathbb {E} _ {p _ {0 | t} (\cdot | x)} X _ {0}\right). \tag {78} +$$ + +The class of reverse SDE solvers we consider is + +$$ +\hat {X} _ {t - h} = \hat {X} _ {t} + \beta_ {t} h \left(\left(\frac {1}{2} + \hat {\omega} _ {t, h}\right) \hat {X} _ {t} + \left(1 + \hat {\kappa} _ {t, h}\right) \left(1 - e ^ {- 2 \int_ {0} ^ {t} \beta_ {u} d u}\right) s _ {\theta} (\hat {X} _ {t}, t)\right) + \hat {\sigma} _ {t, h} \xi_ {t}, \tag {79} +$$ + +where $t = 1,1 - h,\dots,h$ and $\xi_{t}$ are i.i.d. samples from $\mathcal{N}(0,\mathrm{I})$ . Repeating the argument of Theorem 1 leads to the following optimal (in terms of likelihood of the forward diffusion sample paths) parameters: + +$$ +\kappa_ {t, h} ^ {*} = \frac {\nu_ {t - h , t} \left(1 - \gamma_ {0 , t} ^ {2}\right)}{\gamma_ {0 , t} \beta_ {t} h \left(1 + \gamma_ {0 , t} ^ {2}\right)} - 1, \quad \omega_ {t, h} ^ {*} = \frac {\mu_ {t - h , t} - 1}{\beta_ {t} h} + \frac {\left(1 + \kappa_ {t , h} ^ {*}\right) \left(1 + \gamma_ {0 , t} ^ {2}\right)}{1 - \gamma_ {0 , t} ^ {2}} - \frac {1}{2}, \tag {80} +$$ + +$$ +(\sigma_ {t, h} ^ {*}) ^ {2} = \sigma_ {t - h, t} ^ {2} + \frac {1}{n} \nu_ {t - h, t} ^ {2} \mathbb {E} _ {X _ {t}} \left[ \mathrm {T r} \left(\mathrm {V a r} (X _ {0} | X _ {t})\right) \right].
+$$ + +# F REVERSE VE SDE SOLVER + +The VE DPM is characterized by the following forward and reverse diffusions: + +$$ +d X _ {t} = \sqrt {\left(\sigma_ {t} ^ {2}\right) ^ {\prime}} d \overrightarrow {W _ {t}}, \tag {81} +$$ + +$$ +d \hat {X} _ {t} = - \left(\sigma_ {t} ^ {2}\right) ^ {\prime} s _ {\theta} \left(\hat {X} _ {t}, t\right) d t + \sqrt {\left(\sigma_ {t} ^ {2}\right) ^ {\prime}} d \overleftarrow {W} _ {t}. \tag {82} +$$ + +Since for $s < t$ + +$$ +X _ {t} = X _ {s} + \int_ {s} ^ {t} \sqrt {\left(\sigma_ {u} ^ {2}\right) ^ {\prime}} d \overrightarrow {W _ {u}}, \tag {83} +$$ + +a similar argument to the one in Appendix A shows that + +$$ +\operatorname {L a w} \left(X _ {t} \mid X _ {s}\right) = \mathcal {N} \left(X _ {s}, \mathrm {I} \cdot \int_ {s} ^ {t} \left(\sigma_ {u} ^ {2}\right) ^ {\prime} d u\right) = \mathcal {N} \left(X _ {s}, \left(\sigma_ {t} ^ {2} - \sigma_ {s} ^ {2}\right) \mathrm {I}\right). \tag {84} +$$ + +With the following notation: + +$$ +\mu_ {s, t} = \frac {\sigma_ {s} ^ {2} - \sigma_ {0} ^ {2}}{\sigma_ {t} ^ {2} - \sigma_ {0} ^ {2}}, \quad \nu_ {s, t} = \frac {\sigma_ {t} ^ {2} - \sigma_ {s} ^ {2}}{\sigma_ {t} ^ {2} - \sigma_ {0} ^ {2}}, \quad \sigma_ {s, t} ^ {2} = \frac {(\sigma_ {t} ^ {2} - \sigma_ {s} ^ {2}) (\sigma_ {s} ^ {2} - \sigma_ {0} ^ {2})}{\sigma_ {t} ^ {2} - \sigma_ {0} ^ {2}}, \tag {85} +$$ + +we can write down the parameters of the Gaussian distribution $X_{s}|X_{t},X_{0}$ : + +$$ +\mathbb {E} \left[ X _ {s} \mid X _ {t}, X _ {0} \right] = \mu_ {s, t} X _ {t} + \nu_ {s, t} X _ {0}, \operatorname {V a r} \left(X _ {s} \mid X _ {t}, X _ {0}\right) = \sigma_ {s, t} ^ {2} \mathrm {I}. \tag {86} +$$ + +Lemma 1 for VE DPMs takes the following form: + +$$ +s _ {\theta^ {*}} (x, t) = - \frac {1}{\sigma_ {t} ^ {2} - \sigma_ {0} ^ {2}} \left(x - \mathbb {E} _ {p _ {0 | t} (\cdot | x)} X _ {0}\right).
\tag {87} +$$ + +Repeating the argument of the Theorem 1 leads to the following optimal (in terms of likelihood of the forward diffusion sample paths) reverse SDE solver: + +$$ +\hat {X} _ {t - h} = \hat {X} _ {t} + \left(\sigma_ {t} ^ {2} - \sigma_ {t - h} ^ {2}\right) s _ {\theta} \left(\hat {X} _ {t}, t\right) + \sigma_ {t, h} ^ {*} \xi_ {t}, \tag {88} +$$ + +where + +$$ +\left(\sigma_ {t, h} ^ {*}\right) ^ {2} = \frac {\left(\sigma_ {t} ^ {2} - \sigma_ {t - h} ^ {2}\right) \left(\sigma_ {t - h} ^ {2} - \sigma_ {0} ^ {2}\right)}{\sigma_ {t} ^ {2} - \sigma_ {0} ^ {2}} + \frac {1}{n} \nu_ {t - h, t} ^ {2} \mathbb {E} _ {X _ {t}} \left[ \operatorname {T r} \left(\operatorname {V a r} \left(X _ {0} | X _ {t}\right)\right) \right], \tag {89} +$$ + +$t = 1,1 - h,..,h$ and $\xi_{t}$ are i.i.d. samples from $\mathcal{N}(0,\mathrm{I})$ + +# G TOY EXAMPLES + +In this section we consider toy examples where data distribution $X_0$ is represented by a single point (corresponding to the case (ii) of the Theorem 1) and by two points (corresponding to more general case (i)). In the first case the point is unit vector $i = (1,1,\dots,1)$ of dimensionality 100, in the second one two points $i$ and $-2i$ have the same probability. We compare performance of two solvers, Euler-Maruyama and the proposed Maximum Likelihood, depending on the number $N \in \{1,2,5,10,100,1000\}$ of solver steps. The output of the perfectly trained score matching network $s_{\theta^*}$ is computed analytically and Gaussian noise with variance $\varepsilon \in \{0.0,0.1,0.5\}$ is added to approximate the realistic case when the network $s_\theta$ we use is not trained till optimality. We considered VP diffusion model (6) with $\beta_0 = 0.05$ and $\beta_1 = 20.0$ . 
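As a minimal illustration of this setup, the following numpy sketch (not the code used in the paper) runs only the baseline Euler-Maruyama reverse VP solver with the analytically computed score for the single-point data distribution $X_0 = i$; the Maximum Likelihood solver would additionally apply the $\kappa^*, \omega^*, \sigma^*$ corrections derived above. The linear schedule $\beta_t = \beta_0 + (\beta_1 - \beta_0)t$ is an assumption consistent with the VP model (6) and $\beta_0 = 0.05$, $\beta_1 = 20.0$.

```python
import numpy as np

# Toy run of the Euler-Maruyama reverse VP solver with the analytically known
# score, for a data distribution concentrated at x0 = (1, ..., 1) in dim 100.
# beta(t) = beta0 + (beta1 - beta0) * t is an assumed linear schedule;
# B(t) denotes int_0^t beta_u du.

beta0, beta1, dim = 0.05, 20.0, 100
x0 = np.ones(dim)

def beta(t):
    return beta0 + (beta1 - beta0) * t

def B(t):
    return beta0 * t + 0.5 * (beta1 - beta0) * t ** 2

def score(x, t):
    # Exact score: X_t | X_0 ~ N(exp(-B(t)/2) * x0, (1 - exp(-B(t))) * I).
    mean = np.exp(-B(t) / 2.0) * x0
    var = 1.0 - np.exp(-B(t))
    return -(x - mean) / var

def euler_maruyama(n_steps, rng):
    h = 1.0 / n_steps
    x = rng.standard_normal(dim)  # X_1 is approximately N(0, I)
    t = 1.0
    for _ in range(n_steps):
        x = (x + beta(t) * h * (0.5 * x + score(x, t))
             + np.sqrt(beta(t) * h) * rng.standard_normal(dim))
        t -= h
    return x

rng = np.random.default_rng(0)
mse = {N: np.mean([np.mean((euler_maruyama(N, rng) - x0) ** 2)
                   for _ in range(20)])
       for N in (10, 100, 1000)}
print(mse)  # the reconstruction error shrinks as N grows
```

This reproduces the qualitative behaviour of the Euler-Maruyama column of Table 5: larger $N$ means better quality.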
+ +The results of the comparison are given in Table 5 and can be summarized as follows: + +- for both methods, larger $N$ means better quality; +- for both methods, a more accurate score matching network (smaller $\varepsilon$ ) means better quality; +- for a large number of steps, both methods perform the same; +- the proposed Maximum Likelihood solver needs fewer steps than the Euler-Maruyama solver to converge to the data distribution with good accuracy; + +Table 5: Maximum Likelihood (ML) and Euler-Maruyama (EM) solvers comparison in terms of Mean Square Error (MSE). MSE $< 0.001$ is denoted by conv, MSE $> 1.0$ is denoted by div. $N$ is the number of SDE solver steps, $\varepsilon$ is the variance of the Gaussian noise added to the perfect scores $s_{\theta^{*}}$ .
| MSE ML / EM | $X_0=\{i\}$, $\varepsilon=0.0$ | $\varepsilon=0.1$ | $\varepsilon=0.5$ | $X_0=\{i,-2i\}$, $\varepsilon=0.0$ | $\varepsilon=0.1$ | $\varepsilon=0.5$ |
| --- | --- | --- | --- | --- | --- | --- |
| N = 1 | conv / div | div / div | div / div | div / div | div / div | div / div |
| N = 2 | conv / div | div / div | div / div | 0.15 / div | div / div | div / div |
| N = 5 | conv / div | 0.017 / div | 0.085 / div | conv / div | 0.017 / div | 0.085 / div |
| N = 10 | conv / 0.57 | 0.001 / 0.59 | 0.005 / 0.67 | conv / 0.57 | 0.001 / 0.59 | 0.006 / 0.67 |
| N = 100 | conv / 0.01 | conv / 0.01 | conv / 0.01 | conv / 0.01 | conv / 0.01 | conv / 0.01 |
| N = 1000 | conv / conv | conv / conv | conv / conv | conv / conv | conv / conv | conv / conv |
+ +- in accordance with statement (ii) of Theorem 1, the optimal Maximum Likelihood solver leads to exact data reconstruction in the case when the data distribution is concentrated at a single point and the score matching network is trained to optimality (i.e. $\varepsilon = 0.0$ ), irrespective of the number of steps $N$ . + +Also, in the second example where $X_0 \in \{i, -2i\}$ the Maximum Likelihood SDE solver reconstructs the probabilities of these two points better than Euler-Maruyama, which tends to output "i-samples" (which are closer to the origin) more frequently than "−2i-samples". E.g. for $\varepsilon = 0.0$ and $N = 10$ the frequency of "i-samples" generated by the Euler-Maruyama scheme is $54\%$ while this frequency for the Maximum Likelihood scheme is $50\%$ ( $500k$ independent runs were used to calculate these frequencies). + +# H SPEAKER CONDITIONING NETWORK + +The function $x \cdot \tanh(\text{softplus}(x))$ is used as a non-linearity in the speaker conditioning network $g_{t}(Y)$ . First, the time embedding $t_{e}$ is obtained by the following procedure: time $t \in [0,1]$ is encoded with positional encoding (Song et al., 2021c), then the resulting 256-dimensional vector $t'$ is passed through the first linear module with 1024 units, a non-linearity is applied to it, and it is passed through the second linear module with 256 units. Next, the noisy mel-spectrogram $Y_{t}$ for the wodyn input type, or $Y_{t}$ concatenated with $\{Y_{s}|s = 0.5 / 15, 1.5 / 15,\dots, 14.5 / 15\}$ for whole, is passed through 6 blocks consisting of 2D convolutional layers each followed by instance normalization and a Gated Linear Unit. The number of input and output channels of these convolutions is (1,64), (32,64), (32,128), (64,128), (64,256), (128,256) for the wodyn input type, and the same but with 16 input channels in the first convolution for the whole input type.
After the 2nd and 4th blocks, $MLP_{1}(t_{e})$ and $MLP_{2}(t_{e})$ are broadcast-added, where $MLP_{1}$ ( $MLP_{2}$ ) is composed of a non-linearity followed by a linear module with 32 (64) units. After the last, 6th block the result is passed through the final convolution with 128 output channels, and average pooling along both time and frequency axes is applied, resulting in a 128-dimensional vector. All convolutions except for the final one have (kernel, stride, zero padding) = (3,1,1) while for the final one the corresponding parameters are (1,0,0). Denote the result of such processing of $Y$ by $c$ for the wodyn and whole input types. + +The clean target mel-spectrogram $Y_{0}$ is used to obtain a 256-dimensional speaker embedding $d$ with the pre-trained speaker verification network (Jia et al., 2018) whose weights are kept frozen. Vectors $d, c$ and $t'$ are concatenated (except for the $d$ -only input type where we concatenate only $d$ and $t'$ ), passed through a linear module with 512 units followed by a non-linearity and a linear module with 128 units. The resulting 128-dimensional vector is the output of the speaker conditioning network $g_{t}(Y)$ . + +# I TRAINING HYPERPARAMETERS AND OTHER DETAILS + +Encoders and decoders were trained with batch sizes 128 and 32 and the Adam optimizer with initial learning rates 0.0005 and 0.0001, respectively. Encoders and decoders in VCTK models were trained for 500 and 200 epochs respectively; as for LibriTTS models, they were trained for 300 and 110 epochs. The datasets were downsampled to $22.05\mathrm{kHz}$ which was the operating rate of our VC models. VCTK recordings were preprocessed by removing silence in the beginning and in the end of utterances. To fit GPU memory, decoders were trained on random speech segments of approximately 1.5 seconds rather than on the whole utterances.
Training segments for reconstruction and the ones used as input to the speaker conditioning network $g_{t}(Y)$ were different random segments extracted from the same training utterances. Noise schedule parameters $\beta_0$ and $\beta_{1}$ were set to 0.05 and 20.0. + +Our VC models operated on mel-spectrograms with 80 mel features and sampling rate $22.05\mathrm{kHz}$ . The Short-Time Fourier Transform was used to calculate spectra with 1024 frequency bins. A Hann window of length 1024 was applied with hop size 256. + +For Diff-LibriTTS models we used a simple spectral subtraction algorithm in the mel domain with spectral floor parameter $\beta = 0.02$ as post-processing to reduce the background noise sometimes produced by these models. The noise spectrum was estimated on speech fragments automatically detected as the ones corresponding to silence in the source mel-spectrogram. + +# J DETAILS OF AMT TESTS + +For fair comparison with the baselines, all the recordings were downsampled to $16\mathrm{kHz}$ ; we also normalized their loudness. In speech naturalness tests, workers chosen by a geographic criterion were asked to assess the overall quality of the synthesized speech, i.e. to estimate how clean and natural (human-sounding) it was. A five-point Likert scale was used: 1 - "Bad", 2 - "Poor", 3 - "Fair", 4 - "Good", 5 - "Excellent". Assessors were asked to wear headphones and work in a quiet environment. As for speaker similarity tests, workers were asked to assess how similar synthesized samples sounded to target speech samples in terms of speaker similarity. Assessors were asked not to pay attention to the overall quality of the synthesized speech (e.g. background noise or incorrect pronunciation). A five-point scale was used: 1 - "Different: absolutely sure", 2 - "Different: moderately sure", 3 - "Cannot decide: more same or more different", 4 - "Same: moderately sure", 5 - "Same: absolutely sure".
\ No newline at end of file diff --git a/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/images.zip b/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..2beb11f7727a4b42d0404691b9f6f32b34fb848f --- /dev/null +++ b/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0ceca3be985ddde51246a56ad700fbfb8d400b3c85947f33f517f66dc3de6f9b +size 1495597 diff --git a/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/layout.json b/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ec7a361867717ce6f1b347b4aedd52af1ddc17c5 --- /dev/null +++ b/diffusionbasedvoiceconversionwithfastmaximumlikelihoodsamplingscheme/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37f6055245897ec0f4ecce50bf5e616524b19ca15feb9132afed4e5bdd6c1a63 +size 905702 diff --git a/discoveringandexplainingtherepresentationbottleneckofdnns/7c6e37ad-8cb6-4ebd-9150-27653baf83f9_content_list.json b/discoveringandexplainingtherepresentationbottleneckofdnns/7c6e37ad-8cb6-4ebd-9150-27653baf83f9_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5657c6dba605b8c16c59db4351122993cd9c7b19 --- /dev/null +++ b/discoveringandexplainingtherepresentationbottleneckofdnns/7c6e37ad-8cb6-4ebd-9150-27653baf83f9_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d6f4dfc6ec7ddc5afd957b222c6cb9c6971a15b8b4903caeb02a02e32ebfedd +size 175439 diff --git a/discoveringandexplainingtherepresentationbottleneckofdnns/7c6e37ad-8cb6-4ebd-9150-27653baf83f9_model.json b/discoveringandexplainingtherepresentationbottleneckofdnns/7c6e37ad-8cb6-4ebd-9150-27653baf83f9_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..1295e364917f633a964a60656c7f322afce10ef1 --- /dev/null +++ b/discoveringandexplainingtherepresentationbottleneckofdnns/7c6e37ad-8cb6-4ebd-9150-27653baf83f9_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7c19c1a08c502317554fff07804c93aefdf1bed112dd2fc1d2f946d23dd778ac +size 201763 diff --git a/discoveringandexplainingtherepresentationbottleneckofdnns/7c6e37ad-8cb6-4ebd-9150-27653baf83f9_origin.pdf b/discoveringandexplainingtherepresentationbottleneckofdnns/7c6e37ad-8cb6-4ebd-9150-27653baf83f9_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..507c22e10c6b0ba290351376a847a577d0d80757 --- /dev/null +++ b/discoveringandexplainingtherepresentationbottleneckofdnns/7c6e37ad-8cb6-4ebd-9150-27653baf83f9_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:38b66be98bf58384863a24321b98b195e3e18f09134ba9ec4b27d42bf3158c15 +size 13811391 diff --git a/discoveringandexplainingtherepresentationbottleneckofdnns/full.md b/discoveringandexplainingtherepresentationbottleneckofdnns/full.md new file mode 100644 index 0000000000000000000000000000000000000000..dfc9ddbf34e89efaf4a0c2b8740ef842d91920d9 --- /dev/null +++ b/discoveringandexplainingtherepresentationbottleneckofdnns/full.md @@ -0,0 +1,720 @@ +# DISCOVERING AND EXPLAINING THE REPRESENTATION BOTTLENECK OF DNNS + +Huiqi Deng\*, Qihan Ren\*, Hao Zhang, Quanshi Zhang† + +Shanghai Jiao Tong University + +{denghq7,renqihan,1603023-zh,zqs1022}@sjtu.edu.cn + +# ABSTRACT + +This paper explores the bottleneck of feature representations of deep neural networks (DNNs), from the perspective of the complexity of interactions between input variables encoded in DNNs. To this end, we focus on the multi-order interaction between input variables, where the order represents the complexity of interactions. 
We discover that a DNN is more likely to encode both too simple and too complex interactions, but usually fails to learn interactions of intermediate complexity. Such a phenomenon is widely shared by different DNNs for different tasks. This phenomenon indicates a cognition gap between DNNs and humans, and we call it a representation bottleneck. We theoretically prove the underlying reason for the representation bottleneck. Furthermore, we propose losses to encourage/penalize the learning of interactions of specific complexities, and analyze the representation capacities of interactions of different complexities. The code is available at https://github.com/Nebularaid2000/bottleneck. + +# 1 INTRODUCTION + +The revolution from shallow to deep models is a crucial step in the development of artificial intelligence. DNNs usually exhibit superior performance to shallow models, which is generally believed to be a result of the improvement of the representation power (Pascanu et al., 2013; Montúfar et al., 2014). Therefore, instead of revisiting the previously studied issues of the accuracy and generalization ability of DNNs, we focus on the following two questions about the representation capacity: + +- Are there any common tendencies of DNNs in representing specific types of features? +- Does a DNN encode visual concepts similar to those of human beings for image classification? + +In order to answer the above two questions, we first investigate the bottleneck of feature representation, i.e., which types of concepts are likely to be encoded by a DNN, and which types of concepts are difficult to learn. We find that the interaction between input variables is an effective tool to analyze the feature representation. This is because the DNN does not treat input variables as working independently; rather, it encodes interactions between input variables to form interaction patterns for inference.
For example, the inference of a face image can be explained as the interactions between left and right eyes, between nose and mouth, etc. + +As the answers to the above questions, we discover a common representation bottleneck of DNNs in encoding interactions, i.e., a DNN is more likely to encode both too complex and too simple interactions, instead of encoding interactions of intermediate complexity. This bottleneck also indicates a dramatic difference between the inferences of DNNs and humans. + +The interaction can be understood as follows. Let us take the face recognition task for example. Let $\phi_{i = \mathrm{mouth}}$ measure the numerical importance of the mouth region $i$ to the classification score. Then, the interaction utility between the mouth region $i$ and the nose region $j$ is measured as the change of the $\phi_{i = \mathrm{mouth}}$ value by the presence or absence of the nose region $j$ . If the presence of $j$ increases the importance $\phi_{i = \mathrm{mouth}}$ by 0.1, then, we consider 0.1 as the utility of the interaction between $i$ and $j$ . + +![](images/00c12fc4a9f39a4a36e5ec7c0d55b55b84aa01a6219d8e1c47676b482bab24df.jpg) + +![](images/ea8fc9dfd009f3514cd78269ecd607e7457da644576a091c62a886f14230196e.jpg) +Figure 1: (a) Five pixels $(g, h, i, j, k)$ interact with each other, forming an edge pattern for classification. (b) Representation bottleneck. A DNN is likely to encode low-order and high-order interactions, but usually fails to learn middle-order interactions. (c) The cognition gap between DNNs and humans. Humans extract little information from a few image patches (e.g., $5\%$ patches). Also, given almost all patches (e.g., $90\%$ patches), people learn little new information from the additional $5\%$ patches since the information is already redundant for human recognition. In comparison, the DNN encodes most interactions when the DNN is given very few patches or most patches. 
+ +![](images/641c87d297085676cd3c800231e7baaf43f5e371b8c8cbd741859f23fcd3ce89.jpg) +(a) Interaction pattern. (b) Representation bottleneck. (c) Whether humans/DNNs extract new information from patches. + +Multi-order interactions. In order to represent the interaction complexity mentioned in the representation bottleneck, we use the multi-order interaction utility between variables $i,j$ proposed by Zhang et al. (2020). The interaction of the $m$ -th order $I^{(m)}(i,j)$ measures the average interaction utility between variables $i,j$ over all contexts consisting of $m$ variables. In this way, the order $m$ reflects the contextual complexity of the interaction. A low-order $I^{(m)}(i,j)$ measures the relatively simple collaboration between variables $i,j$ and a small number $m$ of contextual variables, while a high-order $I^{(m')}(i,j)$ corresponds to the complex collaboration between $i,j$ and a large number $m'$ of contextual variables, where $m' \gg m$ . + +Moreover, we prove that the multi-order interaction is a trustworthy tool to analyze the representation capacity of DNNs. Specifically, the output score of a DNN can be decomposed into utilities of compositional multi-order interactions between different pairs of variables, i.e., model output $= \sum_{m=0}^{n-2} \sum_{i,j \in N, i \neq j} w^{(m)} I^{(m)}(i,j) + \sum_{i \in N} \text{local utility of } i + \text{bias}$ . For example, the inference score of a face can be decomposed into the interaction utility between left and right eyes, between mouth and nose, etc. Therefore, we can take the utility $I^{(m)}(i,j)$ as the underlying reason to explain the DNN, because each interaction makes a compositional contribution to the output. + +Representation bottleneck of DNNs. Surprisingly, the above decomposition of multi-order interactions enables us to discover a representation bottleneck of DNNs.
As Figure 1(b) shows, low-order and high-order interaction utilities $I^{(m)}(i,j)$ usually have high absolute values, while middle-order interaction utilities $I^{(m)}(i,j)$ usually have low absolute values. In other words, a DNN is more likely to encode the interaction between variables $i$ and $j$ , when $i,j$ interact with a few contextual variables. Similarly, it is also easy for the DNN to learn the interaction, when $i,j$ interact with most contextual variables. However, it is difficult for the DNN to learn the interaction, when $i,j$ cooperate with a medium number of contextual variables. The difficulty of learning middle-order interactions reflects a representation bottleneck of DNNs. + +Cognitive gap between DNNs and humans. Such a representation bottleneck also indicates a significant gap between the concepts encoded by DNNs and the visual cognition of humans. As Figure 1(c) shows, people usually cannot extract meaningful information from a few image patches. Besides, if people are given almost all patches, then the information is already too redundant and inserting additional patches will bring in little new information. In contrast, the DNN encodes most information, when the DNN is given only a few patches or is given most patches. + +Theoretical proof. In this paper, we theoretically prove the mechanism that is responsible for the representation bottleneck. Such proof also enables us to simulate the distribution of interactions of different orders, which well matches the distribution of interactions in real applications. + +Beyond the theoretical proof, another important issue is how to guide the learning of feature representation in DNNs by learning interactions of specific orders. We propose two losses to encourage/penalize the DNN to make inferences by interactions of specific orders, thereby boosting/preventing the learning of such interactions. Experimental results have validated the effectiveness of the two losses. 
Next, we investigate the representation capacities of several DNNs which encode interactions of different orders. We find that the DNNs mainly encoding high-order interactions represent more structural information than the normally trained DNNs. In addition, high-order interactions are vulnerable to adversarial attacks. + +In summary, this paper makes three contributions: + +- This study discovers a representation bottleneck phenomenon of DNNs, i.e., it is difficult for a DNN to learn middle-order interactions. It also clearly proves that DNNs and humans use different types of visual concepts for inference. +- We theoretically prove the underlying reason for the representation bottleneck. +- We design two losses to encourage/penalize the DNN to learn interactions of specific orders. Experiments have validated the effectiveness of the proposed losses. Besides, we investigate the representation capacities of DNNs which encode interactions of different orders. + +# 2 RELATED WORK + +The representation capacity of DNNs. The evaluation of the representation capacity of DNNs provides a new perspective to explain and analyze DNNs. Pascanu et al. (2013) and Montúfar et al. (2014) used the number of linear response regions in a deep rectifier MLP to evaluate its representation capacity. The information bottleneck theory (Shwartz-Ziv & Tishby, 2017) used mutual information to explain how DNNs gradually learn information during the training process. Achille & Soatto (2018), Amjad & Geiger (2019) and Hjelm et al. (2019) further improved the representation capacity by optimizing mutual information. Arpit et al. (2017) studied the memorization behavior of DNNs during training to analyze the feature representations. Xu (2018) used Fourier analysis to understand generalization.
In addition, several metrics were proposed to analyze the generalization capacity or robustness of DNNs, including the stiffness (Fort et al., 2019), the sensitivity (Novak et al., 2018), and the CLEVER score (Weng et al., 2018). Neyshabur et al. (2017) examined whether existing complexity measures can guarantee generalization. + +Previous research has mainly studied the theoretical maximum complexity, generalization ability, and robustness of DNNs. In comparison, our research focuses on the limitation of DNNs in feature representations, i.e., which types of interactions are unlikely to be encoded. + +Interactions. Interactions between input variables of a DNN have been widely investigated in recent years. Based on the Shapley value (Shapley, 1951), Grabisch & Roubens (1999) proposed the Shapley interaction index to define the interaction in a cooperative game. Lundberg et al. (2018) used the interaction to build tree ensemble explanations for DNNs. Janizek et al. (2021) extended Integrated Gradients (Sundararajan et al., 2017) to explain the pairwise feature interaction in DNNs. Sundararajan et al. (2020) proposed the Shapley Taylor interaction to measure interactions among multiple variables. Tsang et al. (2020) and Tsang et al. (2017) interpreted DNNs by detecting statistical interactions between input variables and interactions between network weights, respectively. Peebles et al. (2020) and Tsang et al. (2018) achieved the disentanglement of features by restricting interactions. Song et al. (2019) and Lian et al. (2018) designed network architectures to effectively learn feature interactions. Lengerich et al. (2020) applied the ANOVA technique to measure interactions and further explored the relationship between dropout and interactions. Zhang et al. (2020) proposed the multi-order interaction, and used it to understand and boost dropout. + +Besides, the team of Dr.
Quanshi Zhang has adopted the game-theoretic interactions to build up a theoretical system to explain the representation capacity of a DNN, including explaining the generalization ability (Zhang et al., 2020), the adversarial transferability and adversarial attacks (Wang et al., 2021b;a) of a DNN, and explaining concepts encoded in a DNN (Cheng et al., 2021; Ren et al., 2021a; Zhang et al., 2021b;a). + +# 3 REPRESENTATION BOTTLENECK + +Before the analysis of the representation bottleneck, let us first introduce multi-order interactions between input variables, which are encoded in a DNN. Given a pre-trained DNN $v$ and an input sample with a set of $n$ variables $N = \{1, \dots, n\}$ (e.g., an input image with $n$ pixels), $v(N)$ denotes the network output of all input variables. Input variables of DNNs usually interact with each other to make inferences, instead of working individually. In this study, we mainly discuss the pairwise interactions. For example, as Figure 1(a) shows, pixels $i, j \in N$ interact with each other, forming an edge pattern for classification. If the existence of this pattern increases the network output by 0.01, we consider this pattern has a positive utility of 0.01. Similarly, if the existence of this pattern decreases the network output, we consider this pattern has a negative utility. + +Furthermore, the multi-order interaction $I^{(m)}(i,j)$ between two input variables $i,j\in N,0\leq m\leq n - 2$ , was proposed to measure interactions of different complexities (Zhang et al., 2020). Specifically, the $m$ -th order interaction $I^{(m)}(i,j)$ measures the average interaction utility between variables $i,j$ under all possible contexts consisting of $m$ variables. Therefore, the order $m$ can be considered to represent the contextual complexity of the interaction. For example, as Figure 1(a) shows, five pixels $(i,j,g,h,k)$ collaborate with each other and form an edge pattern for classification. 
Thus, the pairwise interaction between pixels $i,j$ also depends on the three contextual pixels $g,h,k$ . Mathematically, the multi-order interaction $I^{(m)}(i,j)$ is defined as follows: + +$$ +I ^ {(m)} (i, j) = \mathbb {E} _ {S \subseteq N \backslash \{i, j \}, | S | = m} [ \Delta v (i, j, S) ], \tag {1} +$$ + +where $\Delta v(i,j,S) = v(S\cup \{i,j\}) - v(S\cup \{i\}) - v(S\cup \{j\}) + v(S)$ . Here, $v(S)$ is the output score when we keep variables in $S\subseteq N$ unchanged but replace variables in $N\setminus S$ by the baseline value. The baseline value follows the widely-used setting in Ancona et al. (2019), which is set as the average value of the variable over different samples. Let us take the multi-category image classification for example. Given an input image $x$ , $v(S) = v(x_{S})$ can be implemented as any scalar output of the DNN (e.g., $\log \frac{P(\hat{y} = y^{\mathrm{truth}}|x_S)}{1 - P(\hat{y} = y^{\mathrm{truth}}|x_S)}$ of the true category), where we replace the pixel values in $N\setminus S$ of original input $x$ by the baseline value (the average pixel value over images) to construct a masked image $x_{S}$ . Then, $\Delta v(i,j,S) = [v(S\cup \{i,j\}) - v(S\cup \{i\})] - [v(S\cup \{j\}) - v(S)]$ quantifies the marginal effects (the importance) of the variable $j$ that are changed by the presence or absence of the variable $i$ . It represents the utilities of the collaboration between $i,j$ in a context $S$ . + +Generic metric. The proposed multi-order interaction $I^{(m)}(i,j)$ is a generic metric, which has a strong connection with the Shapley value (Shapley, 1951) and the Shapley interaction index (Sundararajan et al., 2020) in game theory. In addition, it has been proven that $I^{(m)}(i,j)$ satisfies the following five desirable properties, i.e., linearity, nullity, commutativity, symmetry, and efficiency properties. The connections with existing metrics and five properties are introduced in Appendix A. 
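For concreteness, the sampling-based estimation of Eq. (1) can be sketched as follows; the callable `v` is a hypothetical stand-in for the DNN's masked-input score $v(S) = v(x_S)$, not the paper's implementation.

```python
import random

# Monte Carlo estimator of the multi-order interaction I^(m)(i, j) of Eq. (1).
# `v` maps a set of kept variables S ⊆ N = {0, ..., n-1} to a scalar score.

def delta_v(v, i, j, S):
    S = frozenset(S)
    return v(S | {i, j}) - v(S | {i}) - v(S | {j}) + v(S)

def interaction(v, n, i, j, m, n_samples=200, seed=0):
    # Estimates I^(m)(i, j) by averaging Delta v over uniformly sampled
    # contexts S of size m drawn from N \ {i, j}.
    rng = random.Random(seed)
    rest = [k for k in range(n) if k not in (i, j)]
    return sum(delta_v(v, i, j, rng.sample(rest, m))
               for _ in range(n_samples)) / n_samples

# Sanity check: v fires only when the pair {0, 1} is fully present, so
# Delta v(0, 1, S) = 1 for every context S and I^(m)(0, 1) = 1 for all m.
v = lambda S: 1.0 if {0, 1} <= S else 0.0
print([interaction(v, 10, 0, 1, m) for m in (0, 4, 8)])  # → [1.0, 1.0, 1.0]
```

In the paper's setting, `v` would be the log-odds output on the masked sample $x_S$, and the sampling over contexts, pairs, and samples follows the strategy referenced in Appendix C.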
+ +# 3.1 REPRESENTATION BOTTLENECK + +According to the efficiency property of $I^{(m)}(i,j)$ , we find that the output of a DNN can be explained as the sum of all interaction utilities of different orders between different pairs of variables. + +$$ +v (N) = v (\emptyset) + \sum_ {i \in N} \mu_ {i} + \sum_ {i, j \in N, i \neq j} \sum_ {m = 0} ^ {n - 2} w ^ {(m)} I ^ {(m)} (i, j) \tag {2} +$$ + +where $\mu_{i} = v(\{i\}) - v(\emptyset)$ , and $w^{(m)} = (n - 1 - m) / [n(n - 1)]$ . Because $I^{(m)}(i,j)$ measures the interaction between variables $i$ and $j$ encoded in DNNs with $m$ contextual variables, we can consider the interaction utility $I^{(m)}(i,j)$ as a specific reason for the inference, which makes a compositional contribution $w^{(m)}I^{(m)}(i,j)$ to the output. + +In this way, we can categorize all underlying reasons for the network output into different complexities. Low-order interactions can be considered as simple underlying reasons, relying on very few variables. High-order interactions can be regarded as complex underlying reasons, depending on massive variables. In order to measure the reasoning complexity of the DNN, we measure the relative interaction strength $J^{(m)}$ of the encoded $m$ -th order interaction as follows: + +$$ +J ^ {(m)} = \frac {\mathbb {E} _ {x \in \Omega} \left[ \mathbb {E} _ {i , j} \left[ \left| I ^ {(m)} (i , j | x) \right| \right] \right]}{\mathbb {E} _ {m ^ {\prime}} \left[ \mathbb {E} _ {x \in \Omega} \left[ \mathbb {E} _ {i , j} \left[ \left| I ^ {(m ^ {\prime})} (i , j | x) \right| \right] \right] \right]} \tag {3} +$$ + +where $\Omega$ denotes the set of all samples. $J^{(m)}$ is computed over all pairs of input variables in all samples. $J^{(m)}$ is normalized by the average value of interaction strength. The distribution of $J^{(m)}$ measures the distribution of the complexity of interactions encoded in DNNs. + +Representation bottleneck. 
Based on the above metric, we discover an interesting phenomenon: a DNN usually encodes strong low-order and high-order interactions, but encodes weak middle-order interactions. Such a phenomenon is shared by different DNN architectures trained on different datasets, which is illustrated by the $J^{(m)}$ curves in Figure 2. Specifically, when the order $m$ is smaller than $0.1n$ or greater than $0.9n$ , the interaction strength $J^{(m)}$ is usually high. In comparison, $J^{(m)}$ is usually low when the order $m$ approximates $0.5n$ . Moreover, Figure 3(a) shows that such a phenomenon exists not only in well-trained DNNs, but also throughout the entire training process. + +![](images/de07b7c00da95dd528b98348a93ae1a5a9619f11ba794c2edfaf8705c7703287.jpg) +Figure 2: The distributions of interaction strength $J^{(m)}$ of different DNNs trained on various image datasets and tabular datasets. + +The above phenomenon indicates that a DNN is more likely to learn simple interactions where a few variables (e.g., fewer than $0.1n$ variables) interact with each other. Similarly, it is easy for a DNN to encode complex interactions where a massive number of variables (e.g., more than $0.9n$ variables) participate. However, it is difficult for a DNN to learn middle-complex interactions in which a medium number of variables (e.g., about $0.5n$ variables) participate. Let us take Figure 1(c) as an example. When a DNN is given very few patches sparsely distributed on the horse image, the DNN can successfully extract the interaction between the few patches. Similarly, when the DNN is given almost all patches, the insertion of two new patches will make the DNN trigger strong collaboration between the two patches and the massive existing patches. However, when the DNN is given only half of the patches, it is difficult for the DNN to encode interactions between the two patches. In a word, the encoded interaction pattern is either too simple or too complex.
The difficulty of learning interaction patterns of moderate complexity reflects a common tendency in the feature representation of DNNs. + +Such a representation bottleneck also indicates that DNNs and human beings encode different types of visual patterns for inference. As Figure 1(c) shows, (i) Given very few patches, a DNN can encode much information from low-order interaction patterns between patches. However, it is difficult for people to recognize such low-order interactions. (ii) Given almost all patches of an image, any additional patches are already too redundant for human cognition, so people do not obtain much new information from additional patches. (iii) Given a medium number of patches, the DNN usually extracts little information, while people can extract much information for recognition. + +Implementation details. In order to measure $J^{(m)}$ , we conducted experiments on three image datasets including the ImageNet dataset (Russakovsky et al., 2015), the Tiny-ImageNet dataset (Le & Yang, 2015) and the CIFAR-10 dataset (Krizhevsky et al., 2009). We mainly analyzed several DNNs trained on these datasets for image classification, including AlexNet (Krizhevsky et al., 2012), VGG-16 (Simonyan & Zisserman, 2014) and ResNet-18/20/50/56 (He et al., 2016). Due to the high dimension of input variables ( $n = 224 \times 224$ for ImageNet), the computational cost of $J^{(m)}$ is prohibitive. To reduce the computational cost, we split the input image into $16 \times 16$ patches, and considered each patch as an input variable. To compute $J^{(m)}$ , we set $v(S|x) = \log \frac{P(\hat{y} = y^*|x_S)}{1 - P(\hat{y} = y^*|x_S)}$ given the masked sample $x_S$ , where $y^*$ is the true label and $P(\hat{y} = y^*|x_S)$ is the probability of classifying the masked sample $x_S$ into the true category.
In the masked sample $x_S$ , pixel values in the image patches in $N \setminus S$ were replaced by the average pixel value over different patches in all images, following Ancona et al. (2019). Note that $J^{(m)}$ is an average over all possible contexts $S$ , all pairs of variables $(i,j)$ , and all samples $x$ , which is computationally infeasible. Therefore, we approximated $J^{(m)}$ using a sampling strategy (Zhang et al., 2020). Please see Appendix C for sampling details. In addition, we conducted experiments on two tabular datasets, namely the UCI census income dataset (census) and the UCI TV news channel commercial detection dataset (commercial) (Dua et al., 2017). Each sample in the two datasets contained $n = 12$ and $n = 10$ input variables, respectively. We analyzed a five-layer MLP (namely, MLP-5) and an eight-layer MLP (namely, MLP-8). Each layer except for the output layer contained 100 neurons. In the computation of $J^{(m)}$ , $P(\hat{y} = y^*|x_S)$ was also computed by setting the baseline value of each variable to the average value of that variable. Please see Appendix C for details.

# 3.2 EXPLAINING THE REPRESENTATION BOTTLENECK

![](images/dbb61b6996d6691155b88c16a4198d695db1755247f3e1f251d5ab022f41e317.jpg)
(a) Dynamics

![](images/c869e795e52e6448f2c0e43f066e243560558f96542b0d380b6098ae074eda75.jpg)
(b) Simulations

![](images/8f048df53da316b7413e3766adbf997d6e5d175a03b5fd914c1e707c3ddd3783.jpg)

![](images/b1f3d9cfc9723753189d239444ef0b35c6035189b4aae9d692438694f878d08a.jpg)

![](images/8fdbcb72a1d0325a115d787bea5fef081a2722ded1a94f804b613f364ef12b4b.jpg)

Figure 3: (a) Distributions of the interaction strength $J^{(m)}$ of a ResNet-20 model over different orders, which were measured after different training epochs. The DNN was trained on the CIFAR-10 dataset. (b) Simulations of the $\hat{J}^{(m)}$ distributions based on $\hat{F}^{(m)}$ curves on the ImageNet dataset.

In this subsection, we theoretically prove the underlying reason for the representation bottleneck. Let $W \in \mathbb{R}^{K}$ denote the network parameters of a DNN. We focus on the change $\Delta W$ of the network parameters, which also represents the strength of training the DNN. The change of weights is calculated by $\Delta W = -\eta \frac{\partial L}{\partial W} = -\eta \frac{\partial L}{\partial v(N)}\frac{\partial v(N)}{\partial W}$ , where $L$ denotes the loss function and $\eta$ is the learning rate. According to Eq. (2), the network output $v(N)$ of the DNN can be decomposed into the sum of multi-order interactions $I^{(m)}(i,j)$ . Therefore, $\Delta W$ can be further represented as the sum of gradients $\frac{\partial I^{(m)}(i,j)}{\partial W}$ of multi-order interactions:

$$
\Delta W = - \eta \frac {\partial L}{\partial v (N)} \frac {\partial v (N)}{\partial W} = \Delta W _ {U} + \sum_ {m = 0} ^ {n - 2} \sum_ {i, j \in N, i \neq j} \Delta W ^ {(m)} (i, j), \tag {4}
$$

where $U = v(\emptyset) + \sum_{i\in N}\mu_i$ . Specifically,

$$
\Delta W _ {U} \stackrel {\text {def}} {=} - \eta \frac {\partial L}{\partial v (N)} \frac {\partial v (N)}{\partial U} \frac {\partial U}{\partial W}, \quad \Delta W ^ {(m)} (i, j) \stackrel {\text {def}} {=} R ^ {(m)} \frac {\partial I ^ {(m)} (i , j)}{\partial W},
$$

where $R^{(m)} = -\eta \frac{\partial L}{\partial v(N)} \frac{\partial v(N)}{\partial I^{(m)}(i,j)}$ . Here, $\Delta W_U$ represents the component of $\Delta W$ w.r.t. $\frac{\partial U}{\partial W}$ , and $\Delta W^{(m)}(i,j)$ represents the component of $\Delta W$ w.r.t. $\frac{\partial I^{(m)}(i,j)}{\partial W}$ . Therefore, besides $\Delta W_U$ , we can consider that there are an additional $\frac{n(n-1)^2}{2}$ paths in the backpropagation, one for each unordered pair $(i,j)$ and each order $m$ , and that the weight change through each propagation path is $\Delta W^{(m)}(i,j)$ .
In this way, we can consider that the norm of $\Delta W^{(m)}(i,j)$ (i.e., $\|\Delta W^{(m)}(i,j)\|_2$ ) measures the strength of learning the interaction between variables $i$ and $j$ under contexts composed of $m$ variables.

Theorem 1. (Proof in Appendix B) Assume $\mathbb{E}_{i,j,S}[\frac{\partial\Delta v(i,j,S)}{\partial W}] = 0$ . Let $\sigma^2$ denote the variance of each dimension of $\frac{\partial\Delta v(i,j,S)}{\partial W}$ . Then, $\mathbb{E}_{i,j}[\Delta W^{(m)}(i,j)] = 0$ and the variance of each dimension of $\Delta W^{(m)}(i,j)$ is $(\eta \frac{\partial L}{\partial v(N)}\frac{n - m - 1}{n(n - 1)})^2\sigma^2 /\binom {n - 2}{m}$ . Therefore, $\mathbb{E}_{i,j}[\| \Delta W^{(m)}(i,j)\| _2^2 ] = K(\eta \frac{\partial L}{\partial v(N)}\frac{n - m - 1}{n(n - 1)})^2\sigma^2 /\binom {n - 2}{m}$ , where $K$ is the dimension of the network parameters $W$ .

Theorem 1 shows that the strength (i.e., the $l_{2}$ -norm $\| \Delta W^{(m)}(i,j)\| _2$ ) of learning $m$ -order interactions is proportional to $F^{(m)} = \frac{n - m - 1}{n(n - 1)} / \sqrt{\binom{n - 2}{m}}$ . Therefore, when the order $m$ is small or large (e.g., $m = 0.05n$ or $0.95n$ ), the training strength of the $m$ -order interaction is relatively high. In contrast, when the order $m$ is medium (e.g., $m = 0.5n$ ), the training strength of the $m$ -order interaction is much lower. The above analysis explains why it is easy for a DNN to learn low-order and high-order interactions, but difficult for a DNN to learn middle-order interactions.

Simulation of the curve of the interaction strength. We find that the above training strength can be used to simulate the distribution of interaction strength $J^{(m)}$ in real applications, which verifies our theory. Based on Theorem 1, the training strength w.r.t. the order $m$ is proportional to the aforementioned $F^{(m)}$ , so we can use $F^{(m)}$ to simulate $J^{(m)}$ .
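The U-shape of the training strength follows directly from the formula for $F^{(m)}$ and can be checked numerically; the value $n = 20$ below is an arbitrary illustrative choice:

```python
import math

# F(m) from Theorem 1: training strength of m-order interactions (up to constants).
def F(m, n):
    return (n - m - 1) / (n * (n - 1)) / math.sqrt(math.comb(n - 2, m))

n = 20
strengths = [F(m, n) for m in range(n - 1)]  # orders m = 0, ..., n-2

# Low and high orders receive far more training strength than middle orders,
# because the binomial coefficient in the denominator peaks at m = (n-2)/2.
print(strengths[0] > 100 * strengths[n // 2])     # → True
print(strengths[n - 2] > 10 * strengths[n // 2])  # → True
```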
For a fair comparison, we normalized $F^{(m)}$ and $J^{(m)}$ by $\hat{F}^{(m)} = F^{(m)} / F^{(0)}$ and $\hat{J}^{(m)} = J^{(m)} / J^{(0)}$ , such that $\hat{F}^{(0)} = \hat{J}^{(0)} = 1$ . Figure 3(b) shows that the curves of $\hat{F}^{(m)}$ match the distributions of $\hat{J}^{(m)}$ well. Due to the redundancy of feature representations in DNNs, we usually consider that the actual dimension $n'$ of the latent space of a DNN is much lower than the number $n$ of input variables. Thus, instead of directly using the number $n$ of input variables, we adopted a smaller $n'$ (i.e., $n' < n$ ) in $\hat{F}^{(m)}$ for the simulation.

# 3.3 METHOD TO CONTROL INTERACTIONS OF SPECIFIC ORDERS

![](images/aa32dca0eb0e6bf19fccbb49724d8558d914081fc5fb0e6dda30424f4e39b852.jpg)

![](images/61505d6bb469944be69230199fb7acc22b4b575d23029cae5f4c4bedea52afe5.jpg)

![](images/a881d3c74a6e2a37161ec775f14ab588b2dce7d854a553b86311d1fcb7a07d71.jpg)

Figure 4: (a) Weight coefficients $\tilde{w}^{(m)}$ of different orders with different pairs of $(r_1, r_2)$ . (b) Distributions of the interaction strength $J^{(m)}$ over different orders. Each curve indicates an AlexNet whose interactions were encouraged/penalized by $L^{+}(r_1, r_2)$ and $L^{-}(r_1, r_2)$ with certain pairs of $(r_1, r_2)$ . (c) Distributions of $J^{(m)}$ of four types of DNNs (AlexNet). Appendix C provides more results.

The representation bottleneck is widely shared by normally trained DNNs of different architectures for various tasks. In this section, we explore methods that force the DNN to learn interactions of specific orders. In this way, we can investigate the properties of the feature representations of such DNNs.

In order to force the DNN to learn interactions of specific orders, we propose two simple yet effective losses for the training process. The two losses encourage and penalize interactions of specific orders, respectively.
Before designing the two losses, let us focus on the output change $\Delta u(r_1,r_2)$ :

$$
\Delta u \left(r _ {1}, r _ {2}\right) = \mathbb {E} _ {S _ {1}, S _ {2}: \emptyset \subseteq S _ {1} \subsetneq S _ {2} \subseteq N} \left[ v \left(S _ {2}\right) - r _ {2} / r _ {1} \cdot v \left(S _ {1}\right) \right], \tag {5}
$$

where the subsets $S_{1}$ and $S_{2}$ are randomly sampled from all input variables $N$ , such that $\emptyset \subseteq S_{1} \subsetneq S_{2} \subseteq N$ , $|S_{1}| = r_{1}n$ , $|S_{2}| = r_{2}n$ , and $0 \leq r_{1} < r_{2} \leq 1$ .

Theorem 2. (Proof in Appendix B) The output change $\Delta u(r_1, r_2)$ can be decomposed into the sum of multi-order interactions between different pairs of variables:

$$
\Delta u \left(r _ {1}, r _ {2}\right) = \left(1 - r _ {2} / r _ {1}\right) v (\emptyset) + \sum_ {m = 0} ^ {n - 2} \sum_ {i, j \in N, i \neq j} \tilde {w} ^ {(m)} I ^ {(m)} (i, j)
$$

$$
\text {where} \quad \tilde {w} ^ {(m)} = \left\{ \begin{array}{l l} (r _ {2} / r _ {1} - 1) (m + 1) / [ n (n - 1) ], & m \leq r _ {1} n - 2 \\ (r _ {2} n - m - 1) / [ n (n - 1) ], & r _ {1} n - 2 < m \leq r _ {2} n - 2 \\ 0, & r _ {2} n - 2 < m \leq n - 2 \end{array} \right. \tag {6}
$$

Interestingly, as Figure 4(a) shows, we can consider that the output change $\Delta u(r_1,r_2)$ mainly encodes interactions whose orders are in the range of $[0,r_2n]$ . The weight coefficient $\tilde{w}^{(m)}$ of the $m$ -th order interaction in $\Delta u(r_1,r_2)$ reaches its peak at the $r_1n$ -th order.

Encourage/penalize interactions of specific orders. Based on the above analysis, $\Delta u(r_1,r_2)$ only contains partial interactions of the $[0,r_2n]$ -th orders. Hence, we propose two losses based on $\Delta u(r_{1},r_{2})$ , which respectively encourage and penalize the DNN to use interactions of specific orders for inference.
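As a minimal sketch, $\Delta u(r_1, r_2)$ in Eq. (5) can be estimated by Monte-Carlo sampling of nested subsets. The helper below and its toy value functions are illustrative (and assume $r_1 > 0$ so that $r_2/r_1$ is defined):

```python
import random

def delta_u(v, n, r1, r2, num_samples=2000, seed=0):
    """Monte-Carlo estimate of Δu(r1, r2): sample nested subsets S1 ⊊ S2 ⊆ N
    with |S1| = r1*n and |S2| = r2*n, assuming r1 > 0."""
    rng = random.Random(seed)
    s1, s2 = round(r1 * n), round(r2 * n)
    assert 0 < s1 < s2 <= n
    acc = 0.0
    for _ in range(num_samples):
        S2 = rng.sample(range(n), s2)   # |S2| = r2*n variables
        S1 = rng.sample(S2, s1)         # S1 is a strict subset of S2
        acc += v(set(S2)) - (r2 / r1) * v(set(S1))
    return acc / num_samples

# A purely additive game v(S) = |S| has no pairwise interactions, so by
# Theorem 2 (with v(∅) = 0) the output change vanishes.
print(abs(delta_u(lambda S: len(S), n=10, r1=0.2, r2=0.5)) < 1e-9)  # → True

# Adding an "AND" interaction between variables 0 and 1 yields a positive Δu,
# since all weight coefficients w̃^(m) in Eq. (6) are non-negative.
v_and = lambda S: len(S) + 3.0 * (0 in S and 1 in S)
print(delta_u(v_and, n=10, r1=0.2, r2=0.5) > 0)  # → True
```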
The first proposed loss $L^{+}(r_{1},r_{2})$ forces the DNN to mainly use the interactions encoded in $\Delta u(r_1,r_2)$ for inference, thereby boosting the learning of these interactions:

$$
L ^ {+} \left(r _ {1}, r _ {2}\right) = - \frac {1}{| \Omega |} \sum_ {x \in \Omega} \sum_ {c = 1} ^ {C} P \left(y ^ {*} = c | x\right) \log P (\hat {y} = c | \Delta u _ {c} \left(r _ {1}, r _ {2} \mid x\right)), \tag {7}
$$

where $L^{+}(r_{1}, r_{2})$ is the cross entropy that uses $\Delta u(r_{1}, r_{2})$ for classification. Here, $\Omega$ is the training set, and $C$ denotes the number of classes. Given an input image $x \in \Omega$ , $y^{*}$ is the true label, and $\hat{y}$ denotes the predicted label. $\Delta u_{c}(r_{1}, r_{2}|x) = v_{c}(S_{2}|x) - r_{2} / r_{1} \cdot v_{c}(S_{1}|x)$ denotes the change of the logit of category $c$ , where the logit $v_{c}(S|x)$ denotes the feature dimension corresponding to the $c$ -th category before the softmax layer. The two subsets $S_{1}, S_{2}$ are randomly sampled. We input the $C$ -dimensional vector $\{\Delta u_{c}(r_{1}, r_{2}|x)|1 \leq c \leq C\}$ into the softmax layer to compute $P(\hat{y} = c|\Delta u_{c}(r_{1}, r_{2}|x))$ , i.e., the probability of classifying the sample into category $c$ based on this vector.

Besides, the second loss $L^{-}(r_{1}, r_{2})$ is designed to prevent the DNN from encoding interactions of the $[0, r_{2}n]$ -th orders. Specifically, we maximize the entropy of the classification based on $\Delta u(r_{1}, r_{2})$ , in order to make $\Delta u(r_{1}, r_{2})$ non-discriminative.
$$
L ^ {-} \left(r _ {1}, r _ {2}\right) = \frac {1}{| \Omega |} \sum_ {x \in \Omega} \sum_ {c = 1} ^ {C} P (\hat {y} = c | \Delta u _ {c} (r _ {1}, r _ {2} | x)) \log P (\hat {y} = c | \Delta u _ {c} (r _ {1}, r _ {2} | x)), \tag {8}
$$

where $L^{-}(r_{1}, r_{2})$ denotes the negative entropy of the classification probability based on $\Delta u_{c}(r_{1}, r_{2})$ . In this way, we can train a DNN using the following loss:

$$
\operatorname {Loss} = \operatorname {Loss} _ {\text {classification}} + \lambda_ {1} L ^ {+} \left(r _ {1}, r _ {2}\right) + \lambda_ {2} L ^ {-} \left(r _ {1}, r _ {2}\right) \tag {9}
$$

Table 1: (left) Classification accuracies of four types of DNNs, including the normally trained DNNs, and the other three types of DNNs mainly encoding low-order, middle-order, and high-order interactions. (right) Comparison of adversarial accuracies between normally trained DNNs and DNNs mainly encoding high-order interactions on the census dataset and the commercial dataset.

| Model | CIFAR-10, AlexNet | CIFAR-10, VGG-16 | CIFAR-10, VGG-19 | Tiny-ImageNet, AlexNet | Tiny-ImageNet, VGG-16 | Tiny-ImageNet, VGG-19 |
| --- | --- | --- | --- | --- | --- | --- |
| Normal training | 88.52 | 90.50 | 90.61 | 56.00 | 56.16 | 52.56 |
| Low interaction | 86.97 | 89.99 | 89.74 | 58.68 | 55.60 | 55.04 |
| Mid interaction | 86.65 | 90.29 | 90.03 | 53.88 | 55.84 | 53.36 |
| High interaction | 88.68 | 90.84 | 90.79 | 56.12 | 55.36 | 53.28 |

| Model | Normal training | Penalize low-order & boost high-order |
| --- | --- | --- |
| MLP-5 on census | 38.22 | 7.31 |
| MLP-8 on census | 39.33 | 2.02 |
| MLP-5 on commercial | 27.01 | 22.00 |
| MLP-8 on commercial | 25.92 | 20.58 |

where $\lambda_1\geq 0,\lambda_2\geq 0$ are two constants to balance the three terms.

Effects of the two losses. In experiments, we found that the loss $L^{-}(r_{1}, r_{2})$ could usually penalize interactions of the $[r_{1}n, r_{2}n]$ -th orders and $L^{+}(r_{1}, r_{2})$ could encourage interactions of the $[r_{1}n, r_{2}n]$ -th orders, rather than penalizing/encouraging all interactions of the $[0, r_{2}n]$ -th orders. Specifically, we conducted experiments as follows. We trained AlexNet on the Tiny-ImageNet dataset to encourage interactions of specific orders without penalizing any interactions, by setting $\lambda_{1} = 1$ , $\lambda_{2} = 0$ . We set $[r_{1} = 0.2, r_{2} = 0.5]$ , $[r_{1} = 0.3, r_{2} = 0.7]$ , and $[r_{1} = 0.6, r_{2} = 0.9]$ in the $L^{+}(r_{1}, r_{2})$ loss to learn three AlexNet models, respectively. We also trained AlexNet models to penalize interactions of specific orders by setting $\lambda_{1} = 0$ , $\lambda_{2} = 1$ ; two DNNs were trained by setting $[r_{1} = 0, r_{2} = 0.2]$ and $[r_{1} = 0, r_{2} = 0.5]$ in the $L^{-}(r_{1}, r_{2})$ loss, respectively. Figure 4(b) shows the interaction strength $J^{(m)}$ of these DNNs. When we encouraged the DNN to encode interactions of the $[r_{1}n, r_{2}n]$ -th orders, the interaction strength $J^{(m)}$ of the $[r_{1}n, r_{2}n]$ -th orders significantly increased, compared to the normally trained DNNs. Figure 4(b) also shows that the loss $L^{-}(r_{1}, r_{2})$ could successfully remove interactions of the $[r_{1}n, r_{2}n]$ -th orders.

# 3.4 INVESTIGATION OF THE REPRESENTATION CAPACITIES

In the previous subsection, we introduced two losses that force the DNN to encode interactions of specific orders. In this subsection, we investigate the representation capacities of such DNNs. To this end, we conducted experiments to train four types of DNNs. The first type of DNN was normally trained.
The other three types of DNNs were trained to mainly encode low-order, middle-order, and high-order interactions, respectively. Specifically, the second DNN was trained to penalize interactions of the $[0.7n, n]$ -th orders by minimizing the $L^{-}(r_{1}, r_{2})$ loss with $\lambda_{1} = 0$ , $\lambda_{2} = 1$ , $r_{1} = 0.7$ , $r_{2} = 1.0$ . The third DNN was trained to boost interactions of the $[0.3n, 0.7n]$ -th orders by minimizing the $L^{+}(r_{1}, r_{2})$ loss with $\lambda_{1} = 1$ , $\lambda_{2} = 0$ , $r_{1} = 0.3$ , $r_{2} = 0.7$ . The fourth DNN was trained to penalize interactions of the $[0, 0.5n]$ -th orders by minimizing the $L^{-}(r_{1}, r_{2})$ loss with $\lambda_{1} = 0$ , $\lambda_{2} = 1$ , $r_{1} = 0$ , $r_{2} = 0.5$ . The second, third, and fourth DNNs were termed the low-order DNN, the middle-order DNN, and the high-order DNN, respectively. In experiments, we trained three versions of each type of DNN under the above four settings, using the AlexNet and VGG-16/19 architectures on the CIFAR-10 and Tiny-ImageNet datasets.

Figure 4(c) and Figure 8 (in the appendix) show that the trained DNNs successfully learned interactions as expected. In other words, interactions of the $[0.7n, n]$ -th orders were penalized in the low-order DNN, interactions of the $[0.3n, 0.7n]$ -th orders were boosted in the middle-order DNN, and interactions of the $[0, 0.5n]$ -th orders were penalized in the high-order DNN.

Classification accuracy. First, Table 1 shows the classification performance of the above four types of DNNs. In general, the four types of DNNs achieved similar accuracies. The similar performance indicated that it was not necessary for a DNN to encode low-order or high-order interactions to make inferences; middle-order interactions could also provide discriminative information.

Bag-of-words representations vs. structural representations.
Theoretically, high-order interactions usually represent the global structure of objects, which requires complex collaborations of massive input variables. In comparison, low-order interactions learn local patterns from local and simple collaborations of a few input variables.

![](images/04b6b8fabb304fc2feda63d6e9a8f26d73e173ab162539e9a64948d103962919.jpg)
Figure 5: Tested images by random masking (left) and centrally-surrounding masking (right).

![](images/8c16470c302e616c0d69d4a5246d968c724ce83ae4467ed5de09b7588b0232f9.jpg)

![](images/eab6b3923d04417e7f1600fe5d56d70e84e7088c893dba5be6ecbabcfe23285a.jpg)
Figure 6: Classification accuracies using VGG-16 on images with different numbers of patches being masked. Appendix C provides more results.

![](images/b4571f56ef45d528ef0c1bd8350c5d4487007a37c88ad0ea229a275b4e085011.jpg)

Therefore, we conducted two experiments to examine whether the high-order DNN encoded more structural information than the normally trained DNN. Specifically, as Figure 5 shows, in the first experiment, we tested the DNN on images where $m$ patches in each image were randomly masked. In the second experiment, we tested the DNN on images where $m$ patches on the image boundary were masked, while the patches in the center were preserved. In this way, we consider that the structural information in the tested images was destroyed in the first experiment, but maintained in the second experiment. In each sub-figure in Figures 6 and 9, we computed the area between the accuracy curve on samples generated by the random masking method and the accuracy curve on samples generated by the centrally-surrounding masking method. This area indicates the sensitivity to the structural destruction. We found that this area was much larger for high-order DNNs than for normally trained DNNs.
This phenomenon indicates that normally trained DNNs usually encode local patterns, just like bag-of-words representations, which are robust to structural destruction, whereas high-order DNNs encode more structural information.

Adversarial robustness. Ren et al. (2021b) demonstrated that adversarial attacks mainly affect high-order interactions. Therefore, we conducted experiments to train DNNs mainly encoding high-order interactions based on the proposed losses, in order to verify whether such DNNs were more vulnerable to adversarial attacks. To train DNNs mainly encoding high-order interactions, we set $\lambda_1 = 1, \lambda_2 = 1$ . Specifically, each DNN was trained to encourage interactions of the $[0.6n, n]$ -th orders by setting $r_1 = 0.6, r_2 = 1$ for $L^{+}(r_1, r_2)$ , and to simultaneously penalize interactions of the $[0, 0.5n]$ -th orders by setting $r_1 = 0, r_2 = 0.5$ for $L^{-}(r_1, r_2)$ . We used the aforementioned MLP-5 and MLP-8 networks from Section 3.1; each MLP was trained on the census and the commercial datasets, respectively. Figure 7 in the appendix shows the distributions of interaction strength of these DNNs. Then, we compared the adversarial robustness of normally trained DNNs with that of the DNNs whose high-order interactions were boosted. We adopted the untargeted PGD attack (Madry et al., 2018) based on the $L_{\infty}$ norm. We set the attack strength $\epsilon = 0.6$ with 100 steps for the census dataset, and $\epsilon = 0.2$ with 50 steps for the commercial dataset. The step size was uniformly set to 0.01 for all attacks. Please see Appendix C for more details. Table 1 shows that DNNs with boosted high-order interactions exhibited significantly lower adversarial accuracies than normally trained DNNs, especially on the census dataset. These results verified that high-order interactions were vulnerable to adversarial attacks.
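For reference, untargeted $L_\infty$ PGD follows a simple sign-gradient-ascent-and-project loop; the sketch below attacks a toy logistic model rather than the MLPs evaluated in the paper, and all names and values are illustrative:

```python
import numpy as np

def pgd_linf(grad_fn, x, eps, step, steps):
    """Untargeted L_inf PGD: repeatedly ascend the loss along the gradient sign,
    then project back into the eps-ball around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + step * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy logistic "network": p(y = 1 | x) = sigmoid(w.x); the true label is y* = 1,
# so the gradient of the loss -log p w.r.t. x is -(1 - sigmoid(w.x)) * w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.2])
grad = lambda z: -(1.0 - 1.0 / (1.0 + np.exp(-w @ z))) * w

x_adv = pgd_linf(grad, x, eps=0.2, step=0.01, steps=50)
print(np.max(np.abs(x_adv - x)) <= 0.2 + 1e-9)  # → True: perturbation stays in the ball
print(w @ x_adv < w @ x)                        # → True: the true-class logit decreased
```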
# 4 CONCLUSION

In this paper, we have discovered and theoretically explained the representation bottleneck of DNNs, from a new perspective of the complexity of interactions encoded in DNNs. We adopted the multi-order interaction, and used the order to represent the complexity of interactions. We discovered a common phenomenon that a DNN usually encodes very simple interactions and very complex interactions, but rarely learns interactions of intermediate complexity. We have theoretically proved the underlying reason for this representation bottleneck. Furthermore, we proposed two losses to learn DNNs that encode interactions of specific complexities. Experimental results have shown that, in terms of classification performance, it is not necessary for a DNN to encode or to avoid encoding interactions of specific orders. However, high-order interactions usually encode more structural information than low-order interactions, and are usually vulnerable to adversarial attacks.

# 5 REPRODUCIBILITY STATEMENT

This research discovered and theoretically explained the representation bottleneck phenomenon, based on the multi-order interaction. Appendix A shows the trustworthiness of the multi-order interaction by introducing its five desirable properties and its connections with existing typical metrics in game theory. Appendix B and Section 3.2 provide proofs for all theoretical results in the paper. Section 3 and Appendix C discuss all experimental details, including the computation of the interaction strength and how to train DNNs with the proposed two losses, which ensures reproducibility. Furthermore, the code has been released at https://github.com/Nebularaid2000/bottleneck.

Acknowledgments. This work is partially supported by the National Science and Technology Innovation 2030 Major Project of the Ministry of Science and Technology of China under Grant (2021ZD0111602), the National Natural Science Foundation of China (No. 
61906120, U19B2043), the Shanghai Natural Science Foundation (21JC1403800, 21ZR1434600), and the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102).

# REFERENCES

Alessandro Achille and Stefano Soatto. Information dropout: Learning optimal representations through noisy computation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12):2897-2905, 2018.
Rana Ali Amjad and Bernhard C Geiger. Learning representations for neural network-based classification using the information bottleneck principle. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(9):2225-2239, 2019.
Marco Ancona, Cengiz Oztireli, and Markus Gross. Explaining deep neural networks with a polynomial time algorithm for Shapley value approximation. In International Conference on Machine Learning, pp. 272-281. PMLR, 2019.
Devansh Arpit, Stanislaw Jastrzebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. A closer look at memorization in deep networks. In International Conference on Machine Learning, pp. 233-242. PMLR, 2017.
Xu Cheng, Chuntung Chu, Yi Zheng, Jie Ren, and Quanshi Zhang. A game-theoretic taxonomy of visual concepts in DNNs. arXiv preprint arXiv:2106.10938, 2021.
Dheeru Dua, Casey Graff, et al. UCI Machine Learning Repository, 2017.
Stanislav Fort, Paweł Krzysztof Nowak, Stanislaw Jastrzebski, and Srini Narayanan. Stiffness: A new perspective on generalization in neural networks. arXiv preprint arXiv:1901.09491, 2019.
Michel Grabisch and Marc Roubens. An axiomatic approach to the concept of interaction among players in cooperative games. International Journal of Game Theory, 28(4):547-565, 1999.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, 2019.
Joseph D Janizek, Pascal Sturmfels, and Su-In Lee. Explaining explanations: Axiomatic feature interactions for deep networks. Journal of Machine Learning Research, 22(104):1-54, 2021.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, volume 25, pp. 1097-1105, 2012.
Ya Le and Xuan Yang. Tiny ImageNet visual recognition challenge. CS 231N, 7(7):3, 2015.
Benjamin Lengerich, Eric P Xing, and Rich Caruana. On dropout, overfitting, and interaction effects in deep neural networks. arXiv preprint arXiv:2007.00823, 2020.
Jianxun Lian, Xiaohuan Zhou, Fuzheng Zhang, Zhongxia Chen, Xing Xie, and Guangzhong Sun. xDeepFM: Combining explicit and implicit feature interactions for recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1754-1763, 2018.
Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 4768-4777, 2017.
Scott M Lundberg, Gabriel G Erion, and Su-In Lee. Consistent individualized feature attribution for tree ensembles. arXiv preprint arXiv:1802.03888, 2018.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.
Guido Montúfar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio.
On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems, 2014.
Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nathan Srebro. Exploring generalization in deep learning. In Advances in Neural Information Processing Systems, 2017.
Roman Novak, Yasaman Bahri, Daniel A Abolafia, Jeffrey Pennington, and Jascha Sohl-Dickstein. Sensitivity and generalization in neural networks: an empirical study. arXiv preprint arXiv:1802.08760, 2018.
Razvan Pascanu, Guido Montufar, and Yoshua Bengio. On the number of response regions of deep feed forward networks with piece-wise linear activations. arXiv preprint arXiv:1312.6098, 2013.
William Peebles, John Peebles, Jun-Yan Zhu, Alexei Efros, and Antonio Torralba. The Hessian penalty: A weak prior for unsupervised disentanglement. In Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VI, pp. 581-597. Springer, 2020.
Jie Ren, Mingjie Li, Qihan Ren, Huiqi Deng, and Quanshi Zhang. Towards axiomatic, hierarchical, and symbolic explanation for deep models. arXiv preprint arXiv:2111.06206, 2021a.
Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Xu Cheng, Xin Wang, Yiting Chen, Jie Shi, and Quanshi Zhang. Game-theoretic understanding of adversarially learned features. arXiv preprint arXiv:2103.07364, 2021b.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.
LS Shapley. Notes on the n-person game--II: The value of an n-person game. The RAND Corporation, Research Memorandum 670, 1951.
Ravid Shwartz-Ziv and Naftali Tishby. Opening the black box of deep neural networks via information.
arXiv preprint arXiv:1703.00810, 2017.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2014.
Weiping Song, Chence Shi, Zhiping Xiao, Zhijian Duan, Yewen Xu, Ming Zhang, and Jian Tang. AutoInt: Automatic feature interaction learning via self-attentive neural networks. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pp. 1161-1170, 2019.
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International Conference on Machine Learning, pp. 3319-3328. PMLR, 2017.
Mukund Sundararajan, Kedar Dhamdhere, and Ashish Agarwal. The Shapley Taylor interaction index. In International Conference on Machine Learning, pp. 9259-9268. PMLR, 2020.
Michael Tsang, Dehua Cheng, and Yan Liu. Detecting statistical interactions from neural network weights. In International Conference on Learning Representations, 2017.
Michael Tsang, Hanpeng Liu, Sanjay Purushotham, Pavankumar Murali, and Yan Liu. Neural interaction transparency (NIT): Disentangling learned interactions for improved interpretability. Advances in Neural Information Processing Systems, 31:5804-5813, 2018.
Michael Tsang, Dehua Cheng, Hanpeng Liu, Xue Feng, Eric Zhou, and Yan Liu. Feature interaction interpretability: A case for explaining ad-recommendation systems via neural interaction detection. In International Conference on Learning Representations, 2020.
Xin Wang, Shuyun Lin, Hao Zhang, Yufei Zhu, and Quanshi Zhang. Interpreting attributions and interactions of adversarial attacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1095-1104, 2021a.
Xin Wang, Jie Ren, Shuyun Lin, Xiangming Zhu, Yisen Wang, and Quanshi Zhang. A unified approach to interpreting and boosting adversarial transferability. In International Conference on Learning Representations, 2021b.
Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, and Luca Daniel. Evaluating the robustness of neural networks: An extreme value theory approach. arXiv preprint arXiv:1801.10578, 2018.
Zhiqin John Xu. Understanding training and generalization in deep learning by Fourier analysis. arXiv preprint arXiv:1808.04295, 2018.
Die Zhang, Huilin Zhou, Hao Zhang, Xiaoyi Bao, Da Huo, Ruizhao Chen, Xu Cheng, Mengyue Wu, and Quanshi Zhang. Building interpretable interaction trees for deep NLP models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 14328-14337, 2021a.
Hao Zhang, Sen Li, Yinchao Ma, Mingjie Li, Yichen Xie, and Quanshi Zhang. Interpreting and boosting dropout from a game-theoretic view. In International Conference on Learning Representations, 2020.
Hao Zhang, Yichen Xie, Longjie Zheng, Die Zhang, and Quanshi Zhang. Interpreting multivariate Shapley interactions in DNNs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 10877-10886, 2021b.

# A THE MULTI-ORDER INTERACTION

Zhang et al. (2020) proposed the multi-order interaction between input variables $i, j$ as follows:

$$
I ^ {(m)} (i, j) = \mathbb {E} _ {S \subseteq N \backslash \{i, j \}, | S | = m} \left[ \Delta v (i, j, S) \right],
$$

where $\Delta v(i,j,S) = v(S\cup \{i,j\}) - v(S\cup \{i\}) - v(S\cup \{j\}) + v(S)$ . $I^{(m)}(i,j)$ denotes the interaction between variables $i,j\in N$ of the $m$ -th order, which measures the average interaction utility between variables $i,j$ under contexts composed of $m$ variables. It has been proven that $I^{(m)}(i,j)$ satisfies the following five desirable properties.

- Linear property. If two independent games $u$ and $v$ are combined, i.e., for $\forall S \subseteq N$ , $w(S) = u(S) + v(S)$ , then the multi-order interaction of the combined game equals the sum of the multi-order interactions derived from $u$ and $v$ .
I.e., $I_w^{(m)}(i,j) = I_u^{(m)}(i,j) + I_v^{(m)}(i,j)$.
- Nullity property. A dummy variable $i \in N$ satisfies $\forall S \subseteq N \setminus \{i\}$, $v(S \cup \{i\}) = v(S) + v(\{i\})$. Then, the variable $i$ has no interactions with other variables, i.e., $\forall m, \forall j \in N \setminus \{i\}$, $I^{(m)}(i,j) = 0$.
- Commutativity property. $\forall i, j \in N, I^{(m)}(i, j) = I^{(m)}(j, i)$.
- Symmetry property. Assume two variables $i, j$ are equivalent in the sense that $i, j$ cooperate with other variables in the same way, i.e., $\forall S \subseteq N \setminus \{i, j\}$, $v(S \cup \{i\}) = v(S \cup \{j\})$. Then, for any variable $k \in N$, $I^{(m)}(i, k) = I^{(m)}(j, k)$.
- Efficiency property. The network output of a DNN can be decomposed into the sum of interactions of different orders between different pairs of variables:

$$
v (N) - v (\emptyset) = \sum_ {i \in N} \mu_ {i} + \sum_ {i, j \in N, i \neq j} \sum_ {m = 0} ^ {n - 2} w ^ {(m)} I ^ {(m)} (i, j).
$$

where $\mu_{i} = v(\{i\}) - v(\emptyset)$ represents the independent effect of variable $i$, and $w^{(m)} = \frac{n - 1 - m}{n(n - 1)}$.

# Connection with the Shapley value and the Shapley interaction index.

Shapley value. Shapley (1951) proposed the Shapley value to measure the numerical importance of each player to the total reward in a cooperative game. The Shapley value has been widely used to explain the decisions of DNNs in recent years (Lundberg & Lee, 2017; Ancona et al., 2019). Specifically, we can consider a DNN with a set of input variables $N = \{1,\dots ,n\}$ as a game. Each input variable (e.g., an image pixel or a word) is regarded as a player, and the network output $v(N)$ on all input variables can be considered as the total reward of the game. The Shapley value aims to fairly distribute the network output to each individual variable as follows:

$$
\phi (i) = \sum_ {S \subseteq N \backslash \{i \}} \frac {| S | ! (n - | S | - 1) !}{n !} [ v (S \cup \{i \}) - v (S) ]
$$

where $v(S)$ denotes the network output when we keep the variables in $S$ unchanged while masking the variables in $N \setminus S$ with a baseline value. The baseline value usually follows the setting in Ancona et al. (2019), i.e., it is set as the average value of the variable over different samples. In this way, $v(S \cup \{i\}) - v(S)$ represents the marginal contribution of $i$ when the variable $i$ is present w.r.t. the case when the variable $i$ is absent, given the context $S \subseteq N \setminus \{i\}$. Then, the Shapley value of $i$ measures the average marginal contribution of $i$ over different contexts $S$. It has been proven that the Shapley value is the unique method of fairly allocating the overall reward to each player that satisfies the linearity, nullity, symmetry, and efficiency properties.

Shapley interaction index. Grabisch & Roubens (1999) proposed the Shapley interaction index $I(S)$ to measure the interaction utility between the input variables in a subset $S \subseteq N$. In particular, the Shapley interaction index between two variables $i, j \in N$, $Index(i, j)$, measures the change of the numerical importance (i.e., the Shapley value) of $i$ caused by the presence or absence of $j$:

$$
\text {Index} (i, j) = \tilde {\phi} (i) _ {j \text { always present}} - \tilde {\phi} (i) _ {j \text { always absent}}, \tag {10}
$$

where $\tilde{\phi}(i)_{j\text{ always present}}$ denotes the Shapley value of the variable $i$ computed under the specific condition that the variable $j$ is always present, and $\tilde{\phi}(i)_{j\text{ always absent}}$ is computed under the specific condition that $j$ is always absent.

Connections with the Shapley interaction index and the Shapley value. We found that the multi-order interaction has strong connections with the Shapley interaction index and the Shapley value.

Specifically, it has been proven that the interaction index $Index(i,j)$ between variables $i,j$, which is closely related to the Shapley interaction index, can be decomposed into multi-order interactions as follows:

$$
\text {Index} (i, j) = \frac {1}{n - 1} \sum_ {m = 0} ^ {n - 2} I ^ {(m)} (i, j).
$$

Besides, the Shapley value of the variable $i$ can also be decomposed into multi-order interactions:

$$
\phi (i) = \frac {1}{n} \sum_ {m = 1} ^ {n - 1} \mathbb {E} _ {j \in N \setminus \{i \}} [ \sum_ {k = 0} ^ {m - 1} I ^ {(k)} (i, j) ] + v (i) - v (\emptyset)
$$

# B PROOF OF THEOREMS

# B.1 PROOF OF THEOREM 1

Motivation of Theorem 1. Let $W$ denote the network parameters of the DNN. Let $L$ and $\eta$ denote the loss function and the learning rate used in training, respectively. Here, we consider the change of network parameters $\Delta W = -\eta \frac{\partial L}{\partial v(N)} \frac{\partial v(N)}{\partial W}$, whose norm indicates the learning strength of the DNN. According to the efficiency property of $I^{(m)}(i,j)$ in Eq. (2), the network output $v(N)$ of a DNN can be decomposed into the sum of multi-order interactions $I^{(m)}(i,j)$ of different orders between different pairs of variables. Therefore, the change $\Delta W$ of the parameters can also be decomposed into the sum of the gradients of the multi-order interactions w.r.t. the parameters, i.e., $\frac{\partial I^{(m)}(i,j)}{\partial W}$. Specifically,

$$
\Delta W = \Delta W _ {U} + \sum_ {m = 0} ^ {n - 2} \sum_ {i, j \in N, i \neq j} \Delta W ^ {(m)} (i, j)
$$

where $U = v(\emptyset) + \sum_{i\in N}\mu_i$. And,

$$
\Delta W _ {U} \stackrel {\mathrm {def}} {=} - \eta \frac {\partial L}{\partial v (N)} \frac {\partial v (N)}{\partial U} \frac {\partial U}{\partial W}, \quad \Delta W ^ {(m)} (i, j) \stackrel {\mathrm {def}} {=} R ^ {(m)} \frac {\partial I ^ {(m)} (i , j)}{\partial W}.
$$

where $R^{(m)} = -\eta \frac{\partial L}{\partial v(N)} \frac{\partial v(N)}{\partial I^{(m)}(i,j)}$. Based on the analysis in Section 3.2, the term $\Delta W^{(m)}(i,j) = R^{(m)} \frac{\partial I^{(m)}(i,j)}{\partial W}$ represents the strength of learning the $m$-order interactions.

Thus, in Theorem 1 we aim to study the strength $\Delta W^{(m)}(i,j)$ of learning the $m$-order interaction.

Proof skeleton of Theorem 1. To theoretically explain the representation bottleneck, we prove that the strength of learning middle-order interactions (i.e., $\Delta W^{(m)}(i,j)$ with $m \approx 0.5n$) is much smaller than the strength of learning low-order and high-order interactions. This is because the learning strength of middle-order interactions is an average of the learning gradients $\frac{\partial \Delta v(i,j,S)}{\partial W}$ of $\Delta v(i,j,S)$ over massive contexts $S$, which results in the cancellation of these gradients. In contrast, the learning strength of low-order (high-order) interactions is the average of $\frac{\partial \Delta v(i,j,S)}{\partial W}$ over only a few contexts $S$, which mitigates this cancellation. Thus, DNNs are more likely to encode low-order and high-order interactions, but usually fail to encode middle-order interactions.

Proof of Theorem 1. The $m$-order interaction $I^{(m)}(i,j)$ between variables $i,j$ is defined as

$$
\begin{array}{l} I ^ {(m)} (i, j) = \mathbb {E} _ {S \subseteq N \backslash \{i, j \}, | S | = m} \Delta v (i, j, S) \\ = \frac {1}{\binom {n - 2} {m}} \sum_ {\substack {S \subseteq N \setminus \{i, j \} \\ | S | = m}} \Delta v (i, j, S). \tag{11} \\ \end{array}
$$

We use $W = [W_{1}, W_{2}, \ldots, W_{K}]^{\top} \in \mathbb{R}^{K}$ to denote the network parameters. Then, based on Eq.
(11),

$$
\begin{array}{l} \frac{\partial I^{(m)}(i,j)}{\partial W} = \sum_{\substack{S\subseteq N\setminus \{i,j\} \\ |S| = m}}\frac{\partial I^{(m)}(i,j)}{\partial\Delta v(i,j,S)}\frac{\partial\Delta v(i,j,S)}{\partial W} \\ = \frac{1}{\binom{n - 2}{m}}\sum_{\substack{S\subseteq N\setminus \{i,j\} \\ |S| = m}}\frac{\partial\Delta v(i,j,S)}{\partial W}. \\ \end{array}
$$

Assume $\mathbb{E}_{i,j,S}[\frac{\partial\Delta v(i,j,S)}{\partial W} ] = 0$. Without loss of generality, let $\sigma^2$ denote the variance of each dimension of $\frac{\partial\Delta v(i,j,S)}{\partial W}$. Since the gradients $\frac{\partial\Delta v(i,j,S)}{\partial W}$ on different contexts are independent of each other, we have

$$
\mathbb {E} _ {i, j} [ \frac {\partial I ^ {(m)} (i , j)}{\partial W} ] = \mathbf {0},
$$

$$
\operatorname {Var} _ {i, j} [ \frac {\partial I ^ {(m)} (i , j)}{\partial w _ {k}} ] = \sigma^ {2} / \binom {n - 2} {m}, \quad \forall k = 1, \ldots , K,
$$

where $K$ is the dimension of the network parameters $W$. Furthermore, because $\Delta W^{(m)}(i,j) = -\eta \frac{\partial L}{\partial v(N)} w^{(m)} \frac{\partial I^{(m)}(i,j)}{\partial W}$, we have

$$
\mathbb {E} _ {i, j} [ \Delta W ^ {(m)} (i, j) ] = \mathbf {0},
$$

$$
\operatorname {Var} _ {i, j} \left[ \Delta W _ {k} ^ {(m)} (i, j) \right] = \left(\eta \frac {\partial L}{\partial v (N)} w ^ {(m)}\right) ^ {2} \sigma^ {2} / \binom {n - 2} {m}, \quad \forall k = 1, \dots , K,
$$

where $w^{(m)} = \frac{n - m - 1}{n(n - 1)}$, and $\Delta W_k^{(m)}(i,j) = -\eta \frac{\partial L}{\partial v(N)} w^{(m)} \frac{\partial I^{(m)}(i,j)}{\partial w_k}$ represents the $k$-th dimension of $\Delta W^{(m)}(i,j)$.
Moreover, we can obtain, + +$$ +\begin{array}{l} \mathbb {E} _ {i, j} [ \| \Delta W ^ {(m)} (i, j) \| _ {2} ^ {2} ] = \mathbb {E} _ {i, j} [ \sum_ {k = 1} ^ {K} \Delta W _ {k} ^ {(m)} (i, j) ^ {2} ] = \sum_ {k = 1} ^ {K} \mathbb {E} _ {i, j} [ \Delta W _ {k} ^ {(m)} (i, j) ^ {2} ] \\ = \sum_ {k = 1} ^ {K} \left[ \left(\mathbb {E} _ {i, j} \left[ \Delta W _ {k} ^ {(m)} (i, j) \right]\right) ^ {2} + \operatorname {V a r} _ {i, j} \left[ \Delta W _ {k} ^ {(m)} (i, j) \right] \right] \tag {12} \\ = \sum_ {k = 1} ^ {K} \operatorname {V a r} _ {i, j} \left[ \Delta W _ {k} ^ {(m)} (i, j) \right] \\ = K (\eta \frac {\partial L}{\partial v (N)} \frac {n - m - 1}{n (n - 1)}) ^ {2} \sigma^ {2} / \binom {n - 2} {m}. \\ \end{array} +$$ + +Therefore, the conclusion of Theorem 1 holds. + +# B.2 PROOF OF THEOREM 2 + +Let $S_{1}, S_{2}$ denote the two variable subsets randomly sampled from the universal set $N$ including all input variables, where $|S_{1}| = r_{1}n$ , $|S_{2}| = r_{2}n$ , and $0 \leq r_{1} < r_{2} \leq 1$ . 
+ +When we consider each $S_{1}$ as the universal set, according to the efficiency property of the multi-order interaction, we can obtain, + +$$ +\begin{array}{l} \mathbb {E} _ {S _ {1}} [ v (S _ {1}) ] = v (\emptyset) + \mathbb {E} _ {S _ {1}} [ \sum_ {i \in S _ {1}} \mu_ {i} ] + \mathbb {E} _ {S _ {1}} [ \sum_ {i, j \in S _ {1}, i \neq j} [ \sum_ {m = 0} ^ {r _ {1} n - 2} \frac {r _ {1} n - 1 - m}{r _ {1} n (r _ {1} n - 1)} I _ {S _ {1}} ^ {(m)} (i, j) ] ] \\ = v (\emptyset) + r _ {1} n \mathbb {E} _ {i} (\mu_ {i}) + \mathbb {E} _ {S _ {1}} [ \sum_ {m = 0} ^ {r _ {1} n - 2} \frac {r _ {1} n - 1 - m}{r _ {1} n (r _ {1} n - 1)} \sum_ {i, j \in S _ {1}, i \neq j} I _ {S _ {1}} ^ {(m)} (i, j) ] \\ = v (\emptyset) + r _ {1} n \mathbb {E} _ {i} (\mu_ {i}) + \sum_ {m = 0} ^ {r _ {1} n - 2} (r _ {1} n - 1 - m) \mathbb {E} _ {S _ {1}} [ \mathbb {E} _ {i, j} [ (I _ {S _ {1}} ^ {(m)} (i, j)) ] ] \\ \end{array} +$$ + +where $I_{S_1}^{(m)}(i,j)\stackrel {\mathrm{def}}{=}\mathbb{E}_{S'\subseteq S_1\setminus \{i,j\},|S'| = m}(\Delta v(i,j,S'))$ + +Similarly, when we consider each $S_{2}$ as the universal set, we can obtain, + +$$ +\begin{array}{l} \mathbb {E} _ {S _ {2}} [ v (S _ {2}) ] = v (\emptyset) + \mathbb {E} _ {S _ {2}} [ \sum_ {i \in S _ {2}} \mu_ {i} ] + \mathbb {E} _ {S _ {2}} [ \sum_ {i, j \in S _ {2}, i \neq j} [ \sum_ {m = 0} ^ {r _ {2} n - 2} \frac {r _ {2} n - 1 - m}{r _ {2} n (r _ {2} n - 1)} I _ {S _ {2}} ^ {(m)} (i, j) ] ] \\ = v (\emptyset) + r _ {2} n \mathbb {E} _ {i} (\mu_ {i}) + \sum_ {m = 0} ^ {r _ {2} n - 2} (r _ {2} n - 1 - m) \mathbb {E} _ {S _ {2}} [ \mathbb {E} _ {i, j} [ (I _ {S _ {2}} ^ {(m)} (i, j)) ] ] \\ \end{array} +$$ + +![](images/56c3efdc88bb5ba34e6c59db5c6f7c1095596d459ed71f42a3b3bda6161bc9b8.jpg) + +![](images/e0e244c04cb3515d4a659f54af867e56d22df2aca99d2d67a927f4d9fe0da8f6.jpg) + +![](images/5fac052bd444938be1336a63d074377ca8ec30d249b4c66f12f26e1520f050bb.jpg) + 
![](images/4e4162122960415a0df66832f7c3890a2a7f04815d986408722c23809991cb7f.jpg)

![](images/e1220e7ec3dd1c7008d88977354489a6f446acdd54f2eeeb8fa223db0c4e8e48.jpg)
Figure 7: (a) Distributions of the interaction strength $J^{(m)}$ of normal DNNs and high-order DNNs, where high-order DNNs were trained by encouraging high-order interactions and penalizing low-order interactions simultaneously. (b) The instability of $J^{(m)}(x)$ w.r.t. the sampling number of pairs of variables $(i,j)$ and the sampling number of contexts $S$.

where $I_{S_2}^{(m)}(i,j) \stackrel{\mathrm{def}}{=} \mathbb{E}_{S' \subseteq S_2 \setminus \{i,j\}, |S'| = m}(\Delta v(i,j,S'))$. Note that the contexts used when computing $I_{S_1}^{(m)}(i,j)$ and $I_{S_2}^{(m)}(i,j)$ are different. It is easy to obtain that, for any order $m$ with $m < r_1 n$ (and thus $m < r_2 n$),

$$
\mathbb {E} _ {S _ {1}} [ \mathbb {E} _ {i, j} (I _ {S _ {1}} ^ {(m)} (i, j)) ] = \mathbb {E} _ {S _ {2}} [ \mathbb {E} _ {i, j} (I _ {S _ {2}} ^ {(m)} (i, j)) ] = \mathbb {E} _ {i, j} (I ^ {(m)} (i, j)).
$$

Therefore, $\Delta u(r_1,r_2)$ can be rewritten as follows:

$$
\begin{array}{l} \Delta u (r _ {1}, r _ {2}) = \mathbb {E} _ {(S _ {1}, S _ {2}): \emptyset \subseteq S _ {1} \subsetneq S _ {2} \subseteq N} [ v (S _ {2}) - r _ {2} / r _ {1} \cdot v (S _ {1}) ] \\ = \mathbb {E} _ {S _ {2}} \left[ \mathbb {E} _ {S _ {1}} \left[ v (S _ {2}) - r _ {2} / r _ {1} \cdot v (S _ {1}) \right] \right] \\ = \mathbb {E} _ {S _ {2}} [ v (S _ {2}) ] - r _ {2} / r _ {1} \cdot \mathbb {E} _ {S _ {1}} [ v (S _ {1}) ] \\ = (1 - r _ {2} / r _ {1}) v (\emptyset) + \sum_ {m = 0} ^ {n - 2} \sum_ {i, j \in N, i \neq j} \tilde {w} ^ {(m)} I ^ {(m)} (i, j) \\ \end{array}
$$

$$
\text {where} \quad \tilde {w} ^ {(m)} = \left\{ \begin{array}{l l} (r _ {2} / r _ {1} - 1) (m + 1) / [ n (n - 1) ], & m \leq r _ {1} n - 2 \\ (r _ {2} n - m - 1) / [ n (n - 1) ], & r _ {1} n - 2 < m \leq r _ {2} n - 2 \\ 0, & r _ {2} n - 2 < m \leq n - 2 \end{array} \right.
$$

Then, the conclusion holds.

# C EXPERIMENTAL DETAILS AND MORE RESULTS

# C.1 IMPLEMENTATION DETAILS

The experiments were conducted on the CIFAR-10, Tiny-ImageNet, and ImageNet datasets, and on two tabular datasets. Due to the computational cost, we selected 50 classes from the 200 classes at equal intervals (i.e., the 4th, 8th, ..., 196th, and 200th classes) when we trained DNNs on the Tiny-ImageNet dataset.

The sampling strategy in the computation of $J^{(m)}$. The interaction strength $J^{(m)}$ is defined as

$$
J ^ {(m)} = \frac {\mathbb {E} _ {x \in \Omega} [ \mathbb {E} _ {i , j} [ | I ^ {(m)} (i , j | x) | ] ]}{\mathbb {E} _ {m ^ {\prime}} [ \mathbb {E} _ {x \in \Omega} [ \mathbb {E} _ {i , j} [ | I ^ {(m ^ {\prime})} (i , j | x) | ] ] ]}, \quad \text {where} \quad I ^ {(m)} (i, j | x) = \mathbb {E} _ {\substack{S \subseteq N \setminus \{i, j \} \\ | S | = m}} [ \Delta v (i, j, S) ].
$$

To precisely compute $J^{(m)}$, we would need to average over all possible contexts $S \subseteq N$, all pairs of variables $i, j \in N$, and all samples $x \in \Omega$, which is usually computationally infeasible. Therefore, we adopted the sampling strategy used in (Zhang et al., 2020) to approximately compute $J^{(m)}$.

The sampling strategy was conducted as follows. With respect to the number of input samples, we sampled 50 correctly classified samples on each image dataset. On the ImageNet and Tiny-ImageNet datasets, these images were sampled from 50 different classes. On the CIFAR-10 dataset, we sampled 5 images from each class. Then, for each image, we sampled 200 pairs of patches $i,j\in N$. Since DNNs usually encode stronger interactions between neighboring patches, we required patch $i$ to be located within a radius of two patches around patch $j$. Next, for each pair of patches $i,j$ and each order $m$, we randomly sampled 100 contexts $S$ with $|S| = m$ from all possible contexts.
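This Monte-Carlo estimate of $I^{(m)}(i,j|x)$ can be sketched as follows. This is a minimal illustration, assuming a caller-supplied scalar game function `v(S)` (the network output with the variables outside $S$ masked to baseline values); the neighboring-patch restriction and image preprocessing are omitted, and all names are illustrative:

```python
import numpy as np

def delta_v(v, i, j, S):
    """Interaction utility: v(S∪{i,j}) - v(S∪{i}) - v(S∪{j}) + v(S)."""
    S = set(int(k) for k in S)
    return v(S | {i, j}) - v(S | {i}) - v(S | {j}) + v(S)

def estimate_interaction(v, n, i, j, m, n_contexts=100, seed=0):
    """Monte-Carlo estimate of the m-order interaction I^(m)(i,j|x):
    average delta_v over random contexts S ⊆ N \\ {i,j} with |S| = m."""
    rng = np.random.default_rng(seed)
    pool = [k for k in range(n) if k not in (i, j)]
    samples = [delta_v(v, i, j, rng.choice(pool, size=m, replace=False))
               for _ in range(n_contexts)]
    return float(np.mean(samples))
```

As a sanity check, for a purely additive game ($\Delta v \equiv 0$) the estimate vanishes at every order, whereas for an AND-style game that rewards the joint presence of $i$ and $j$ it equals one at every order; $J^{(m)}$ then normalizes the mean absolute estimates across orders.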
Besides, we computed $J^{(m)}$ for 13 different orders, $m = 0,0.05n,0.1n,0.2n,\dots,0.8n,0.9n,0.95n,1.0n$. On each tabular dataset, since there are only a few input variables ($n = 12$ on the census dataset and $n = 10$ on the commercial dataset), we sampled 100 instances and used all pairs of variables for the computation. For each pair of variables and each order $m$, we randomly sampled 100 contexts $S$ from all possible contexts.

![](images/1ace99fecf726be6b689e196ee03ee5174c52f094211c166223fda5f1f9b1436.jpg)
Figure 8: Distributions of the interaction strength $J^{(m)}$ of four types of DNNs. The low-order, middle-order, and high-order DNNs were trained by following the parameter setting mentioned in Section 3.4, while $\lambda_{2}$ was set as 0.1 (instead of 1) for low-order DNNs on the Tiny-ImageNet dataset.

![](images/85b5d7667da1dabd5ad906d549185bcf493e616be856aebe40be60fe49d30f1e.jpg)

![](images/31d9eda4103b2cc81fe101715306c51d5b6bad7209a877dbaed25ec5dd7a698b.jpg)

![](images/d9dacde34860860e5e08227085aa937435b25dfa1db06159d077324fc6ed3491.jpg)

![](images/e43da26ddee8292dab6bbd57514df87340e70c56912d067e8e69e634666524d5.jpg)
Figure 9: Classification accuracies using the AlexNet architecture on images with different numbers of patches being masked.

![](images/875e884b2386b21241ef4d5eb4255c13cd374c06bdc9a6fee316b9d77b9e667e.jpg)

To validate the reliability of the approximated $J^{(m)}$ obtained via the above sampling strategy, we evaluated the (in)stability of $J^{(m)}$ during the sampling process. Specifically, we designed an instability metric for the case where $J^{(m)}$ is repeatedly computed $q$ times. Firstly, for each input image $x$, we defined $J^{(m)}(x) \stackrel{\mathrm{def}}{=} \frac{\mathbb{E}_{i,j} [|I^{(m)}(i,j|x)|]}{\mathbb{E}_{m'} \mathbb{E}_{i,j} [|I^{(m')}(i,j|x)|]}$. The instability w.r.t.
the $J^{(m)}(x)$ was computed as $\frac{\mathbb{E}_{u,v;u \neq v} |J_u^{(m)}(x) - J_v^{(m)}(x)|}{\mathbb{E}_w |J_w^{(m)}(x)|}$, where $J_u^{(m)}(x)$ and $J_v^{(m)}(x)$ denote the estimates of $J^{(m)}(x)$ from the $u$-th and the $v$-th sampling runs, respectively. Then, the instability of $J^{(m)}$ was computed as follows.

$$
\text {instability} = \mathbb {E} _ {x \in \Omega} \mathbb {E} _ {m} [ \frac {\mathbb {E} _ {u , v ; u \neq v} | J _ {u} ^ {(m)} (x) - J _ {v} ^ {(m)} (x) |}{\mathbb {E} _ {w} | J _ {w} ^ {(m)} (x) |} ].
$$

The instability is an average over all sampled images and all sampled orders.

Based on the above definition, we used the AlexNet trained on the Tiny-ImageNet dataset and the VGG-16 trained on the CIFAR-10 dataset to compute the above instability. We conducted two experiments to evaluate the instability of $J^{(m)}$ w.r.t. the sampling of the contexts $S$ and the sampling of the pairs $(i,j)$, respectively. In the first experiment, we fixed 100 pairs of $(i,j)$ and evaluated the instability of $J^{(m)}$ w.r.t. the sampling of the contexts $S$. Figure 7(b) shows that, on both datasets, the instability decreased as the sampling number of $S$ increased. Furthermore, when the sampling number of $S$ was greater than 100 (i.e., our setting), the instability value was less than 0.05, which indicated a stable approximation. In the second experiment, we evaluated the instability w.r.t. the sampling of $(i,j)$, where the sampling number of $S$ was set to 100 as mentioned above. As Figure 7(b) shows, the instability decreased as the number of sampled $(i,j)$ pairs increased. When the number of sampled $(i,j)$ pairs was greater than 200 (i.e., our setting), the instability was less than 0.1, which indicated a stable approximation of $J^{(m)}$. These results demonstrated that the adopted sampling strategy can well approximate the interaction strength $J^{(m)}$.

Implementation details of adversarial attacks.
Here, we introduce how we measured the adversarial robustness in Section 3.4. We adopted the untargeted PGD attack (Madry et al., 2018) with the $L_{\infty}$ constraint $\| \Delta x \|_{\infty} \leq \epsilon$ to generate adversarial examples. For the census dataset, we set $\epsilon = 0.6$ and ran the attack for 100 steps. For the commercial dataset, we set $\epsilon = 0.2$ and ran the attack for 50 steps. The step size was set to 0.01 for all attacks.

![](images/c2d54ca1a7ec49d697b9daf9cc4b69eddf1231e5f6634799739176ff4aa936a2.jpg)
Figure 10: Images generated by random masking (left) and randomly-surrounding masking (right).

![](images/a7661d14945e713ac647dc1967b57a2c7d4ef7b2b02359989c4157fcc89775a7.jpg)

![](images/af10c08ea6921e1ded93f0b6258971434995cd31e6e03b9680cfc0ec6c14eba6.jpg)
Figure 11: Classification accuracies using the VGG-16 architecture on images with different numbers of patches being masked (by random masking and randomly-surrounding masking).

![](images/063d4da5c6af8d1f49eb6fefa4f6f5330d4af2f38b3122dccb6cba52756c4c73.jpg)

![](images/dd3bd5a4216145061dfac21031d69797bb91e01d46d65765012df55bf3d25eb3.jpg)

# C.2 MORE EXPERIMENTAL RESULTS

In this subsection, we provide more experimental results beyond those in the main paper.

# C.2.1 MORE EXPERIMENTS ON THE EFFECTIVENESS OF LOSSES

Besides the AlexNet architecture, we also used the proposed losses to train the four types of DNNs (i.e., normally trained DNNs, low-order DNNs, middle-order DNNs, and high-order DNNs) with the VGG-16 and VGG-19 architectures on the CIFAR-10 dataset and the Tiny-ImageNet dataset. The parameters were set the same as in Section 3.4. Note that when $r_1 = 0$, we set $\Delta u(r_1, r_2) = \mathbb{E}_{S_2}[v(S_2)] - v(\emptyset)$. The experimental results in Figure 8 demonstrated that the four types of DNNs usually could successfully learn interactions as expected, which further validated the effectiveness of the proposed losses.
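For reference, the untargeted $L_{\infty}$ PGD attack used in the robustness evaluations (cf. the settings in C.1) can be sketched in a framework-agnostic way. This is a minimal sketch, not the authors' implementation: `grad_loss` stands in for backpropagating the classification loss through the network, and the projection step enforces $\|\Delta x\|_{\infty} \leq \epsilon$:

```python
import numpy as np

def pgd_linf(x, grad_loss, eps, step_size=0.01, n_steps=100):
    """Untargeted PGD under the L_inf constraint ||x' - x||_inf <= eps.

    grad_loss(x_adv) must return the gradient of the loss w.r.t. x_adv;
    each iteration ascends the loss along the gradient sign, then projects
    the perturbed point back onto the eps-ball around the clean input x.
    """
    x_adv = x.astype(float).copy()
    for _ in range(n_steps):
        x_adv = x_adv + step_size * np.sign(grad_loss(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # L_inf projection
    return x_adv
```

Note that with the settings above (e.g., 100 steps of size 0.01 against $\epsilon = 0.6$), the cumulative step length exceeds $\epsilon$, so the projection step is what keeps the perturbation feasible.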

# C.2.2 MORE EXPERIMENTS ON STRUCTURAL REPRESENTATION

Verifying structural representations on more DNNs. Figure 9 shows classification accuracies using the AlexNet architecture on images with different numbers of patches being masked. These results further validated that high-order DNNs were more sensitive to the destruction of structural information.

More masking methods to verify structural representations. Besides, we designed a new masking method to further investigate a DNN's capacity for encoding structural information. Specifically, we masked $m$ patches in each image and only preserved a single large region of $n - m$ patches, as shown in Figure 10. The position of the preserved region was randomly determined, rather than fixed at the center of the image as before. We call this new masking method the randomly-surrounding masking method. By doing so, the structural information within the large region is maintained.

Figures 11 and 12 show the experimental results. In each sub-figure, the $x$-axis represents the ratio of preserved patches, and the $y$-axis represents the classification accuracy when only the masked images were fed as input. In addition, the blue curve denotes the accuracy on the samples generated by the random masking method, and the orange curve shows the accuracy on the samples generated by the randomly-surrounding masking method. The area between the blue curve and the orange curve measures the sensitivity of the DNN to structural destruction.

Based on the accuracy on masked samples in Figures 11 and 12, we found that the aforementioned area of the high-order DNN was much larger than that of the normally trained DNN. These experimental results verified that high-order DNNs encoded more structural information than normally trained DNNs. Such results were also consistent with the results obtained in our previous experiments in Figures 6 and 9.
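The two masking schemes compared here can be sketched on a patch grid. This is a minimal sketch, assuming masks are boolean patch grids later applied to the image; the grid and region sizes are illustrative:

```python
import numpy as np

def random_mask(grid_h, grid_w, n_masked, rng):
    """Random masking: mask n_masked patches chosen uniformly at random,
    destroying structural information (bag-of-words-like input)."""
    keep = np.ones(grid_h * grid_w, dtype=bool)
    idx = rng.choice(grid_h * grid_w, size=n_masked, replace=False)
    keep[idx] = False
    return keep.reshape(grid_h, grid_w)

def randomly_surrounding_mask(grid_h, grid_w, keep_h, keep_w, rng):
    """Randomly-surrounding masking: keep one contiguous keep_h x keep_w
    region at a random position and mask all surrounding patches, so the
    structural information inside the region is preserved."""
    keep = np.zeros((grid_h, grid_w), dtype=bool)
    top = int(rng.integers(0, grid_h - keep_h + 1))
    left = int(rng.integers(0, grid_w - keep_w + 1))
    keep[top:top + keep_h, left:left + keep_w] = True
    return keep
```

Both schemes can preserve the same number of patches, so accuracy differences between them isolate the effect of destroying structure.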
+ +![](images/27debf589e500515ba4ba4f3d64093c761b66b419e9768d91920b41a3100bdb9.jpg) +Figure 12: Classification accuracies using AlexNet architecture on images with different numbers of patches being masked (by random masking and randomly-surrounding masking). + +![](images/d359d4d8b287121d69b1c8c801e732dfaa4ade7d730a63bd6b0172061513e683.jpg) + +![](images/1468233c62e0960604c4a9086e8c120579fc0f71dc67d266a8f479af00af87b0.jpg) + +![](images/a8734b9ed2c4d2a2d4f76c9822e952087e1c073bcd0e80974ee11dcd5827d293.jpg) + +![](images/97572b2013fbb1794663b3cec604a2da34e9b671b9db7a58cb4371481b61a26c.jpg) + +![](images/2c4f17e0df6a4d4fc4fde83885f84711f219a3c7bb6d3e221168d1ce9ed0c449.jpg) + +![](images/f4156edf6239668e31af4882cb7cc0a049fea955e72c756fd4c310cf69b2a4bd.jpg) + +![](images/253896cef19d6e85a93e5e8a246b8307482fdff6600b798098c83907d3cac2d6.jpg) + +![](images/d2a504663a7081533c5236456afc31887f87b796773dc9798d803572afcb8039.jpg) +Figure 13: Classification accuracies of low-order DNNs, middle-order DNNs, high-order DNNs, and normally trained DNNs (using VGG16) on images with different numbers of patches being masked. Here, we used the random masking method and the centrally-surrounding masking method. The blue color indicated that the random masking method had a lower accuracy, while the red color indicated that the random masking method had a higher accuracy. The phenomenon that the random masking method had a higher accuracy may be because the random masking method preserved more abundant local information in generated images, while the centrally-surrounding masking method will make the unmasked regions concentrate together. Considering that adjacent regions often contain similar feature information, the centrally-surrounding masking method will lead to redundancy of information and limit the diversity of information. 

![](images/0d21f10506c0b2b78d18706e7ec328e0dd85aee3beddad43c4928811687633e5.jpg)

![](images/2ff739885c2f41d414092d8f370269651ca71c16f3365b2e8510938d635de4a3.jpg)

![](images/8388b232488e4dc27b2d5787e21ab1adb4c10002e9c5f90b14978815fcd69c90.jpg)

Comparisons of structural representations between four types of DNNs. We further compared the capacity of encoding structural representations among four types of DNNs (i.e., normally trained DNNs, low-order DNNs, middle-order DNNs, and high-order DNNs). To this end, we followed the experimental setting in Section 3.4. We trained low-order DNNs by penalizing interactions of the $[0.7n, n]$-th orders based on the $L^{-}(r_{1}, r_{2})$ loss, and trained middle-order DNNs by boosting interactions of the $[0.3n, 0.7n]$-th orders based on the $L^{+}(r_{1}, r_{2})$ loss.

Figure 13 compares classification accuracies on images generated by the random masking method (which can be considered bag-of-words features without structure) and on images generated by the centrally-surrounding masking method (which preserves structural patterns). A large gap between the two classification accuracies indicates that the learned DNN encoded rich structural information for inference; i.e., the larger the area between the accuracy curve on images generated by the random masking method and the accuracy curve on images generated by the centrally-surrounding masking method, the richer the structural information encoded in the DNN. In this way, Figure 13 compares the capacity of encoding structural information between four types of VGG-16 networks. Similarly, Figure 14 compares the capacity of encoding structural information between four types of AlexNets.

From these comparisons, we drew three conclusions.

(i) Low-order DNNs trained on the CIFAR-10 dataset encoded little structural information, because such DNNs did not exhibit much difference in classification accuracy on the above two types of samples. Furthermore, for low-order DNNs trained on the Tiny-ImageNet dataset, the accuracies on images generated by the random masking method were even higher than the accuracies on images generated by the centrally-surrounding masking method. This may be because the random masking method preserved more abundant local information in the generated images, while the centrally-surrounding masking method makes the unmasked regions concentrate together. Considering that adjacent regions often contain similar feature information, the centrally-surrounding masking method leads to redundancy of information and limits its diversity. This result further verified that low-order DNNs preferred bag-of-words representations, but failed to encode structural information.

(ii) High-order DNNs showed the largest accuracy difference between the above two types of samples, which indicated that high-order DNNs encoded the most structural information.

(iii) The capacity of middle-order DNNs to encode structural information lay somewhere between that of low-order DNNs and high-order DNNs; i.e., middle-order DNNs encoded more structural information than low-order DNNs, but less than high-order DNNs.

These experimental results demonstrated that it was the high-order interactions that were mainly responsible for encoding structural information.

![](images/f6b6feb024f0a366d9967974e63ffd35ec81dba6f0a99ad491b55819dd8b6561.jpg)

![](images/2ee0bb848e2eaf52b0c6aaf34622a30fc5fb6d60520aa090fdb30b395d881f0c.jpg)

![](images/87f659e4f7ee514b2775d55de50ddc9f741afb90f3a6f6a3006d5aa6561c502e.jpg)

![](images/0c802ac5b3db414ed49501b7fced806ed99fbc884b481fb94606348467fd2bc3.jpg)

![](images/f3132e99edd69e7004e338645b730da18eda2cb6c2cb26d3d96a211669f1b5a3.jpg)
Figure 14: Classification accuracies of low-order DNNs, middle-order DNNs, high-order DNNs, and normally trained DNNs (using AlexNet) on images with different numbers of patches being masked. Here, we used the random masking method and the centrally-surrounding masking method. The blue color indicates that the random masking method had a lower accuracy, while the red color indicates that the random masking method had a higher accuracy. The latter phenomenon may arise because the random masking method preserves more abundant local information in the generated images, while the centrally-surrounding masking method makes the unmasked regions concentrate together; since adjacent regions often contain similar feature information, this leads to redundancy of information and limits its diversity.

![](images/44a8d5bf57642d37f3de0fb4c579a847016ecd367b267d812bf0f03358db4f4e.jpg)

![](images/3fff350d31a85cc5bb1ff41c095a314dcecbd472fe5929d9a1bfbce979c96c28.jpg)

![](images/c46c4be844e7821330eb76b031444dcffeeab4792f1074549283b7f238a72625.jpg)
Figure 15: The multi-order interaction $I_{nor}^{(m)}$ on normal samples and $I_{adv}^{(m)}$ on adversarial samples for normally trained DNNs. Adversarial perturbations mainly affected high-order interactions.

![](images/d1a0b3732093aeb6dfb4c94ea6a604c87e96d0aa3a9783ad6191999b293370b3.jpg)

![](images/3218218cfcb2743ea7415fe9a17a9a19fad599d27f3565b42a69b61b64c42637.jpg)
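The "area between the accuracy curves" used above as a proxy for structural sensitivity can be computed with a simple trapezoid rule; a minimal sketch, with the function name and inputs being illustrative:

```python
import numpy as np

def structural_sensitivity(keep_ratios, acc_random, acc_surrounding):
    """Trapezoid-rule area between the accuracy curve under structure-
    preserving (surrounding) masking and the curve under random masking,
    over the kept-patch-ratio axis. A larger area indicates that the DNN
    relies more on the structural information destroyed by random masking."""
    r = np.asarray(keep_ratios, dtype=float)
    gap = (np.asarray(acc_surrounding, dtype=float)
           - np.asarray(acc_random, dtype=float))
    return float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(r)))
```

When the two curves coincide (as for the low-order DNNs discussed above), the area is zero, indicating little reliance on structure.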

![](images/105bbed3038188dd7fe1bd2d9dcbbffea4504b00fd7c5bfa7fa06959c69d8861.jpg)

![](images/fece04bf36f0fe051c633c3fb5332362c8ed35bf361d6e575db53dc39f1ed65a.jpg)

![](images/027a359bf8304779e4566e119a901e4e183eeff0e89fd39420facaa126d181e4.jpg)

![](images/184875d64767322dde656b7b528315ef92fe1bad1d65965596be4119c3fb8437.jpg)

![](images/80d1f1439f6b702f0b973c7e7ab6aad102c58d5ee666af6336ff9d30954ca1a4.jpg)
Figure 16: (a) Distributions of the interaction strength $J^{(m)}$ of the normally trained DNN and DNNs trained by using multiple $L^{+}$, $L^{-}$, and $(r_1, r_2)$ pairs. (b) Effects of $\lambda_{2}$: distributions of the interaction strength $J^{(m)}$ of DNNs trained by using different $\lambda_{2}$s, penalizing the $[0, 0.5n]$-th or the $[0.7n, n]$-th orders.

![](images/ccb47425da2d9885bee2302d5858dfca2053b6b3397e1346195fea920ef2b905.jpg)

![](images/c2d08db62c9eb48abeed5dee9dd414249e5f4571a4dd44f05dded595e4a601d8.jpg)

![](images/22081c1f6234038ebc5c9a30093d46e20bcee7eecd60164df7a576dfdc4106bd.jpg)
Figure 17: Distributions of the interaction strength $J^{(m)}$ of the normally trained DNN and DNNs trained with $L^{+}$ and/or $L^{-}$ w.r.t. different pairs of $(r_1, r_2)$, on the census dataset and the commercial dataset.

# C.2.3 MORE EXPERIMENTS ON ADVERSARIAL ROBUSTNESS

Comparisons of adversarial robustness between four types of DNNs. We further compared the adversarial robustness of four types of DNNs: low-order DNNs, middle-order DNNs, high-order DNNs, and normally trained DNNs. To this end, we followed the experimental setting in Section 3.4. Here, the low-order DNNs were trained by encouraging interactions of the $[0, 0.5n]$-th orders based on $L^{+}(r_{1}, r_{2})$ and simultaneously penalizing interactions of the $[0.6n, n]$-th orders based on $L^{-}(r_{1}, r_{2})$.
We set $\lambda_{1} = 10$, $\lambda_{2} = 10$. The middle-order DNNs were trained to encourage interactions of the $[0.3n, 0.7n]$-th orders based on $L^{+}(r_{1}, r_{2})$; here we set $\lambda_{1} = 10$, $\lambda_{2} = 0$. We used the MLP-5 and MLP-8 networks in Section 3.1. Each MLP was trained on the census dataset and the commercial dataset, respectively. Adversarial attacks were conducted following the experimental settings in Section 3.4.

Table 2 reports the adversarial robustness of the four types of DNNs. We obtained the following results. (i) Low-order DNNs usually exhibited the highest adversarial robustness, outperforming normally trained DNNs. (ii) High-order DNNs exhibited the lowest robustness, significantly lower than that of normally trained DNNs. (iii) The robustness of middle-order DNNs lay between that of low-order and high-order DNNs. These results showed a significant connection between adversarial robustness and the orders of the encoded interactions.

More explorations on adversarial robustness. We further explored the relationship between adversarial robustness and the order of interactions. Following Ren et al. (2021b), we conducted experiments on images in the ImageNet dataset and the Tiny-ImageNet dataset to explore which orders of interactions were vulnerable to adversarial attacks.

Specifically, let $x$ denote a normal sample, and let $x' = x + \epsilon$ denote an adversarial sample. Then, according to the efficiency property of $I^{(m)}(i,j)$, the output change caused by adversarial perturbations, $v(N|x) - v(N|x')$, can be represented as the sum of differences in multi-order interactions as follows.
$$
v(N|x) - v(N|x') = \text{ignorable bias} + \sum_{m} w^{(m)} \sum_{i,j} \Delta I^{(m)}(i,j),
$$

where $\Delta I^{(m)}(i,j) = I^{(m)}(i,j|x) - I^{(m)}(i,j|x')$ is the compositional adversarial effect on the interaction utility $I^{(m)}(i,j|x)$ caused by adversarial perturbations. In this way, the overall attacking utility (the output change) can be decomposed into adversarial effects on massive compositional interactions $I^{(m)}(i,j|x)$.

![](images/0b791f296f10c4bfd7cdf3e580613c016a04bca47cff562ca5d9480a8387f4af.jpg)

![](images/7b278ffc59437727bb7334f1c573739532f5ccfd1aa0bc486da4e3d4b4bcce12.jpg)
Figure 18: Distributions of the interaction strength $J^{(m)}$ of DNNs trained by using different pairs $(\lambda_1, \lambda_2)$ on the census dataset and the commercial dataset.

Table 2: Comparison of the adversarial accuracy between normally trained DNNs, low-order DNNs, middle-order DNNs, and high-order DNNs on the census dataset and the commercial dataset.
| Model | Normal training | Penalize high-order & boost low-order | Boost middle-order | Penalize low-order & boost high-order |
| --- | --- | --- | --- | --- |
| MLP-5 on census | 38.22 | 42.40 | 15.93 | 7.31 |
| MLP-8 on census | 39.33 | 44.65 | 18.10 | 2.02 |
| MLP-5 on commercial | 27.01 | 29.76 | 23.70 | 22.00 |
| MLP-8 on commercial | 25.92 | 28.86 | 22.55 | 20.58 |
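The decomposition of the attack utility into per-order interaction effects can be illustrated numerically. The array layout and the helper name below are assumptions for illustration:

```python
import numpy as np

def attack_effect_by_order(I_clean, I_adv, w):
    """Decompose the attack utility v(N|x) - v(N|x') into per-order effects.

    I_clean, I_adv: arrays of shape (n_orders, n_pairs) holding the interaction
    utilities I^(m)(i, j) for the normal and the adversarial sample.
    w: per-order weights w^(m). The ignorable bias term is omitted."""
    delta = I_clean - I_adv                    # Delta I^(m)(i, j)
    per_order = w * delta.sum(axis=1)          # w^(m) * sum_{i,j} Delta I^(m)(i, j)
    return per_order, float(per_order.sum())   # per-order effects and their total
```

Plotting `per_order` against $m$ reproduces the kind of comparison shown in Figure 15: orders whose effects dominate the total are the ones most affected by the attack.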
Therefore, Figure 15 uses the change of multi-order interactions $\Delta I^{(m)}(i,j)$ to explain the specific causes of adversarial vulnerability. This figure shows the difference between interactions of normal samples and interactions of adversarial samples. Adversarial perturbations mainly affected high-order interactions and rarely affected low-order interactions, a phenomenon also shown in previous studies. The above experiments further verified the connection between adversarial robustness and the order of the encoded interactions.

In summary, experimental results showed a strong connection between adversarial robustness and the order of the encoded interactions. Nevertheless, the order of interactions does not fully determine adversarial robustness. For example, previous studies have shown that interactions encoded in adversarially trained DNNs were also more robust to attacks.

# C.2.4 MORE EXPERIMENTS ON DIFFERENT HYPER-PARAMETERS

Training DNNs with both $L^{+}$ and $L^{-}$ w.r.t. different pairs of $(r_1, r_2)$. We investigated the performance of DNNs when we simultaneously used multiple $L^{+}$ and $L^{-}$ terms with different pairs of $(r_1, r_2)$ in the loss function. We used the following five experimental settings to learn DNNs: (i) $L^{+}(0, 0.3)$ and $L^{+}(0.7, 1.0)$; (ii) $L^{+}(0.3, 0.7)$ and $L^{+}(0.7, 1.0)$; (iii) $L^{+}(0.2, 0.4)$ and $L^{+}(0.6, 0.8)$; (iv) $L^{-}(0, 0.5)$ and $L^{+}(0.6, 1)$; (v) $L^{+}(0, 0.5)$ and $L^{-}(0.6, 1)$. DNNs were trained on the CIFAR-10 dataset, the census dataset, and the commercial dataset, respectively.

Figure 16(a) shows that the trained DNNs successfully learned interactions as expected. For example, the DNN trained with $L^{+}(0,0.3)$ & $L^{+}(0.7,1.0)$ simultaneously boosted both interactions of the $[0,0.3n]$-th orders and interactions of the $[0.7n,n]$-th orders. These results further verified the effectiveness of the proposed losses.
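A minimal sketch of how several $L^{+}$/$L^{-}$ terms might be combined with the classification loss. The sign conventions and the `strength` estimator are illustrative assumptions, not the paper's exact formulation:

```python
def combined_loss(cls_loss, strength, boost_ranges, penalize_ranges, lam1, lam2):
    """Hypothetical combination of the classification loss with several
    L+(r1, r2) and L-(r1, r2) terms. `strength(r1, r2)` stands in for an
    estimator of the mean interaction strength over the [r1*n, r2*n]-th orders."""
    loss = cls_loss
    for r1, r2 in boost_ranges:       # L+(r1, r2): encourage these orders
        loss = loss - lam1 * strength(r1, r2)
    for r1, r2 in penalize_ranges:    # L-(r1, r2): suppress these orders
        loss = loss + lam2 * strength(r1, r2)
    return loss
```

Setting (v) above, for instance, would pass `boost_ranges=[(0, 0.5)]` and `penalize_ranges=[(0.6, 1)]`, with $\lambda_1$ and $\lambda_2$ chosen as in the experiments.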
Furthermore, DNNs trained with $L^{+}$ and/or $L^{-}$ w.r.t. different pairs of $(r_1, r_2)$ further verified the conclusion obtained from Table 1 (left): although interactions of different orders behaved differently in terms of both adversarial robustness (Table 1 (right)) and the capacity to encode structural information (Figures 6, 9, 13, and 14), interactions of different orders yielded similar classification accuracies (the difference was within ±1% accuracy). This showed that although interactions of different orders had distinctive properties, such differences did not necessarily affect classification accuracy on normal images.

Effects of hyper-parameters. We further investigated the effects of different hyper-parameters.

First, the effects of a single $(r_1,r_2)$ pair have been investigated: Figures 4(b) and 4(c) show that the $L^{+}(r_{1},r_{2})$ / $L^{-}(r_{1},r_{2})$ loss boosted/penalized interactions of the $[r_1n,r_2n]$-th orders.

Second, we conducted new experiments to further investigate the effects of multiple $(r_1,r_2)$ pairs. We used the following five sets of experimental settings, including (i) $L^{+}(0,0.3)$

Table 3: Comparison of the running time between normally training DNNs for 200 epochs and training DNNs for 200 epochs with the proposed losses.
| Model | CIFAR-10, AlexNet | CIFAR-10, VGG16 | Tiny-ImageNet, AlexNet | Tiny-ImageNet, VGG16 |
| --- | --- | --- | --- | --- |
| Normally training DNNs | 0.46 h | 0.83 h | 0.89 h | 9.25 h |
| Training using $L^{+}(r_1, r_2)$ | 1.36 h | 2.35 h | 1.87 h | 27.28 h |
| Training using $L^{-}(r_1, r_2)$ | 1.27 h | 2.12 h | 1.82 h | 26.68 h |
![](images/4ea3f60c25ac01deab95d684d9a6d3cf5f65f81feab107ffa05310acd63c91f7.jpg)
Figure 19: Curves of the classification loss $\mathrm{Loss}_{\mathrm{classification}}$ w.r.t. training epochs for normally training DNNs and for training DNNs with the proposed losses.

![](images/acc49a7ec60f1f6c84464c49351a65cac3d45076bf5e6a7a17de6c30dd11d1c3.jpg)

and $L^{+}(0.7,1.0)$; (ii) $L^{+}(0.3,0.7)$ and $L^{+}(0.7,1.0)$; (iii) $L^{+}(0.2,0.4)$ and $L^{+}(0.6,0.8)$; (iv) $L^{-}(0,0.5)$ and $L^{+}(0.6,1)$; (v) $L^{+}(0,0.5)$ and $L^{-}(0.6,1)$. We used these settings to train DNNs on the CIFAR-10 dataset. Figure 16(a) shows that the $L^{+}(r_{1},r_{2})$ and $L^{+}(r_1',r_2')$ losses simultaneously boosted interactions of the $[r_1n,r_2n]$-th orders and the $[r_1'n,r_2'n]$-th orders. For example, by using $L^{+}(0,0.3)$ and $L^{+}(0.7,1.0)$, the trained DNN simultaneously boosted both interactions of the $[0,0.3n]$-th orders and interactions of the $[0.7n,n]$-th orders.

In addition, we conducted experiments on two tabular datasets, i.e., the census dataset and the commercial dataset. We extended the above five experimental settings (w.r.t. $r_1$ and $r_2$) from image data to tabular data, training five MLP-5 networks on each tabular dataset, one per setting. The architecture of the MLP-5 is introduced in Section 3.1. Figure 17 shows that the $L^{+}(r_{1},r_{2})$ and $L^{+}(r_1',r_2')$ losses could be used to simultaneously boost interactions of the $[r_1n,r_2n]$-th orders and interactions of the $[r_1'n,r_2'n]$-th orders.

Third, we conducted new experiments to investigate the effects of the parameters $\lambda_1$ and $\lambda_2$. Specifically, we fixed $\lambda_{1} = 0$ and adjusted $\lambda_{2} = 0.01, 0.1, 1, 5$ to study the effects of $\lambda_{2}$. We trained DNNs on the Tiny-ImageNet dataset. First, we trained high-order DNNs by penalizing interactions of the $[0, 0.5n]$-th orders.
Figure 16(b) shows that the interaction strength of the $[0, 0.5n]$-th orders decreased as $\lambda_{2}$ increased, indicating that larger $\lambda_{2}$ values penalized interactions of the $[0, 0.5n]$-th orders more strongly. Second, we trained low-order DNNs by penalizing interactions of the $[0.7n, n]$-th orders. Similarly, we found that the strength of these high-order interactions also decreased as $\lambda_{2}$ increased, further indicating that larger $\lambda_{2}$ values penalized the high-order interactions more strongly. We also conducted experiments on the census dataset and the commercial dataset. Specifically, we fixed $\lambda_{1} = 0$ and adjusted $\lambda_{2} = 0.1, 1, 10$. We trained high-order DNNs by penalizing interactions of the $[0, 0.5n]$-th orders, using the MLP-5 networks mentioned in Section 3.1. Figure 18(a) shows the same pattern: larger $\lambda_{2}$ values penalized interactions of the $[0, 0.5n]$-th orders more strongly. These results showed that the parameter $\lambda_{2}$ controlled the strength of penalization.

Furthermore, we investigated the effects of $\lambda_{1}$ on the two tabular datasets. Specifically, we fixed $\lambda_{2} = 0$ and adjusted $\lambda_{1} = 0.1, 1, 10$ to study the effects of $\lambda_{1}$. We trained middle-order DNNs by boosting interactions of the $[0.3n, 0.7n]$-th orders, again using the MLP-5 networks mentioned in Section 3.1. Figure 18(b) shows that the parameter $\lambda_{1}$ controlled the strength of boosting interactions.

# C.2.5 TIME COMPLEXITY ANALYSIS

We compared the running time of training DNNs for 200 epochs using the proposed losses against normally training DNNs for 200 epochs. We trained these models with a mini-batch size of 128 on a single NVIDIA GeForce RTX 3090 GPU and used 4 subprocesses for data loading.
Table 3 shows that the time cost of training DNNs with the proposed losses was about three times that of normally training DNNs, which is acceptable in real applications.

![](images/8a8bf5a6058f0c7a18920e7a47d8c692d5b513525874b96040528717985efdf6.jpg)
Figure 20: Distributions of the interaction strength $\bar{J}^{(m)}$ of different DNNs using the new metric. These DNNs were trained on various image datasets and tabular datasets.

Table 4: Mean values and standard deviations $(\mu \pm \sigma)$ of the projections on the first five principal directions of the gradient (at initialization), computed under different sizes of contexts $S$.
| | $\|S\|=0.05n$ | $\|S\|=0.5n$ | $\|S\|=0.95n$ |
| --- | --- | --- | --- |
| The first direction | -0.0024±0.1004 | 0.0025±0.1740 | 0.0103±0.2277 |
| The second direction | 0.0089±0.0890 | 0.0003±0.1434 | 0.0076±0.1906 |
| The third direction | -0.0005±0.0855 | -0.0012±0.1350 | 0.0139±0.1819 |
| The fourth direction | -0.0029±0.0802 | -0.0012±0.1250 | -0.0010±0.1543 |
| The fifth direction | 0.0000±0.0766 | 0.0005±0.1161 | 0.0022±0.1453 |
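A table of this form can be produced by projecting sampled gradient vectors onto their first five principal directions and reporting per-direction statistics. The following is a hypothetical sketch, not the authors' code:

```python
import numpy as np

def projection_stats(grads, k=5):
    """Project sampled gradient vectors onto their first k principal directions
    and report the per-direction mean and standard deviation."""
    g = np.asarray(grads, dtype=float)                     # (n_grads, dim)
    # principal directions via SVD of the centred gradient matrix
    _, _, vt = np.linalg.svd(g - g.mean(axis=0), full_matrices=False)
    proj = g @ vt[:k].T                                    # (n_grads, k)
    return proj.mean(axis=0), proj.std(axis=0)
```

Means close to zero across all directions, as in Tables 4 and 5, are what support the zero-mean assumption.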
In fact, a more reasonable experiment should compare the time needed to train the DNN to convergence, i.e., the time cost of training a DNN to a certain decrease of the classification loss. To this end, we plotted curves of the classification loss w.r.t. training epochs for normally trained DNNs and for DNNs trained with the proposed $L^{+}(r_{1}, r_{2})$ and $L^{-}(r_{1}, r_{2})$ losses, respectively. Figure 19 shows that all three types of DNNs achieved convergence by the 200-th epoch, which validates the reliability of the above comparison of time costs.

# C.2.6 OTHER VALIDATION EXPERIMENTS

Using a new metric to validate the bottleneck. To further remove the effect of the magnitude of $|I^{(m)}(i,j|x)|$ on different samples for a fair comparison, we adopted the following new metric:

$$
\bar{J}^{(m)} = \mathbb{E}_{x \in \Omega} \frac{\mathbb{E}_{i,j}[|I^{(m)}(i,j|x)|]}{\mathbb{E}_{m'}[\mathbb{E}_{i,j}[|I^{(m')}(i,j|x)|]]} \tag{13}
$$

where $\Omega$ denotes the set of all samples. We found that using the new metric resulted in little change to the distribution of the interaction strength. Figure 20 shows a similar representation bottleneck phenomenon under the new metric, i.e., middle-order interactions were less likely to be encoded in DNNs.

Validation of the zero-mean assumption in Theorem 1. We further verified the reliability of the zero-mean assumption in Theorem 1. To this end, we analyzed the mean value $\mathbb{E}_{i,j,S}[\frac{\partial\Delta v(i,j,S)}{\partial W}]$ of the gradient w.r.t. the parameter $W$ at the first convolutional layer when training ResNet-18 on the Tiny-ImageNet dataset.

Since the gradient $\frac{\partial\Delta v(i,j,S)}{\partial W}$ is a high-dimensional vector, it is difficult to visualize the expectation over all dimensions. Instead, we analyzed the projections onto the first five principal directions of the gradient.
The validation then reduces to analyzing the mean value and standard deviation of each projection.

Specifically, we computed the five principal directions $o_1, o_2, \ldots, o_5$. We examined the mean value and the standard deviation of the gradient projected on each direction $o_i$, over gradients w.r.t. different triplets $(i,j,S)$ computed on 10 different samples. Given each input sample, we computed gradients $\frac{\partial \Delta v(i,j,S)}{\partial W}$ for 10 pairs of $(i,j)$ and 1000 contexts $S$.

The examination was conducted for $|S| = 0.05n$, $|S| = 0.5n$, and $|S| = 0.95n$. Tables 4 and 5 report the above mean values and standard deviations for the network at initialization and the network trained for 40 epochs, respectively. Tables 4 and 5 show that these mean values were almost 0, which validated the zero-mean assumption in Theorem 1.

In addition, for simplicity, we used $\sigma^2$ to denote the variance of the gradient $\frac{\partial\Delta v(i,j,S)}{\partial W}$ in Theorem 1, without distinguishing different sizes of contexts $S$. Tables 4 and 5 show that there existed

Table 5: Mean values and standard deviations $(\mu \pm \sigma)$ of the projections on the first five principal directions of the gradient (at 40 epochs), computed under different sizes of contexts $S$.
| | $\|S\|=0.05n$ | $\|S\|=0.5n$ | $\|S\|=0.95n$ |
| --- | --- | --- | --- |
| The first direction | -0.0015±0.4037 | 0.0003±0.8675 | 0.0423±1.5470 |
| The second direction | -0.0209±0.3042 | 0.0042±0.5653 | 0.0802±0.9754 |
| The third direction | -0.0284±0.2817 | 0.0070±0.5164 | 0.0269±0.7845 |
| The fourth direction | 0.0208±0.2315 | 0.0003±0.5057 | 0.0281±0.7231 |
| The fifth direction | -0.0018±0.1853 | 0.0082±0.4178 | -0.0358±0.6715 |
some small differences between the variances of the gradient under different sizes of contexts $S$. However, such differences are negligible in the proof and do not affect the conclusion of Theorem 1. This is because the learning strength of the $m$-th order interaction is proportional to $\frac{n - m - 1}{n(n - 1)} / \sqrt{\binom{n - 2}{m}}$, whose variation across different sizes of contexts was much larger than the above differences in variance.

# C.2.7 CLARIFICATION OF THE INSIGHT OF TABLE 1

Table 1 compares the classification accuracy (left part) and the adversarial robustness (right part) of DNNs encoding different orders of interactions.

Table 1 provides some new insights. Comparing the results in Table 1 (left) with those in Table 1 (right) and Figures 6, 9, 13, and 14, we found that although interactions of different orders behaved differently in terms of both adversarial robustness (Table 1 (right)) and the capacity to encode structural information (Figures 6, 9, 13, and 14), interactions of different orders yielded similar classification accuracies. This shows that although interactions of different orders have distinctive properties, such differences do not necessarily affect classification accuracy.
\ No newline at end of file diff --git a/discoveringandexplainingtherepresentationbottleneckofdnns/images.zip b/discoveringandexplainingtherepresentationbottleneckofdnns/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..607f6db2fd9221e6f4ba33c8b6eea83599ee9ed2 --- /dev/null +++ b/discoveringandexplainingtherepresentationbottleneckofdnns/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:29de34885b43c0222ecfdeab86d3685a2e94cc55b63076712386860b31900cd8 +size 1537823 diff --git a/discoveringandexplainingtherepresentationbottleneckofdnns/layout.json b/discoveringandexplainingtherepresentationbottleneckofdnns/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b94082603251b4c1bac2e2a36097103e473155f9 --- /dev/null +++ b/discoveringandexplainingtherepresentationbottleneckofdnns/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:baf7d6d5dccd2afee8599532648ade8768408383033b488975a0b1a4c6be4649 +size 1202283 diff --git a/dominodiscoveringsystematicerrorswithcrossmodalembeddings/95d3a1cd-9134-48e3-bc32-1a49460eb0a1_content_list.json b/dominodiscoveringsystematicerrorswithcrossmodalembeddings/95d3a1cd-9134-48e3-bc32-1a49460eb0a1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9c79a0ac547b52fc97f304be53965524c4675963 --- /dev/null +++ b/dominodiscoveringsystematicerrorswithcrossmodalembeddings/95d3a1cd-9134-48e3-bc32-1a49460eb0a1_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4b973a3cb798f573314761ac4d32a5378919ebe4a99fb7e1d602104e28e6daf8 +size 168654 diff --git a/dominodiscoveringsystematicerrorswithcrossmodalembeddings/95d3a1cd-9134-48e3-bc32-1a49460eb0a1_model.json b/dominodiscoveringsystematicerrorswithcrossmodalembeddings/95d3a1cd-9134-48e3-bc32-1a49460eb0a1_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..b6495d3c8e8ee7be4eb37b33e9887af59c85a218 --- /dev/null +++ b/dominodiscoveringsystematicerrorswithcrossmodalembeddings/95d3a1cd-9134-48e3-bc32-1a49460eb0a1_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d65d05aa931ef7fc94cf64fa75e9d54c25d17a75dec6406f9cc5fd4279595fe1 +size 222191 diff --git a/dominodiscoveringsystematicerrorswithcrossmodalembeddings/95d3a1cd-9134-48e3-bc32-1a49460eb0a1_origin.pdf b/dominodiscoveringsystematicerrorswithcrossmodalembeddings/95d3a1cd-9134-48e3-bc32-1a49460eb0a1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f2f5c62765e0f9533a88e57662cd4b69daacf9b2 --- /dev/null +++ b/dominodiscoveringsystematicerrorswithcrossmodalembeddings/95d3a1cd-9134-48e3-bc32-1a49460eb0a1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1a3e4a0a27c04b0c41de88ecd18b6f77de7d10ed863acf7c89a4d6bfbc988c10 +size 14396526 diff --git a/dominodiscoveringsystematicerrorswithcrossmodalembeddings/full.md b/dominodiscoveringsystematicerrorswithcrossmodalembeddings/full.md new file mode 100644 index 0000000000000000000000000000000000000000..db750dc3b1360a9829416800ba87cff684abe43d --- /dev/null +++ b/dominodiscoveringsystematicerrorswithcrossmodalembeddings/full.md @@ -0,0 +1,760 @@ +# DOMINO: DISCOVERING SYSTEMATIC ERRORS WITH CROSS-MODAL EMBEDDINGS + +Sabri Eyuboglu*, Maya Varma*, Khaled Saab*, Jean-Benoit Delbrouck, Christopher Lee-Messer, Jared Dunnmon, James Zou, Christopher Ré + +Stanford University, USA; {eyuboglu, mvarma2, ksaab}@stanford.edu; (*Equal contribution) + +# ABSTRACT + +Machine learning models that achieve high overall accuracy often make systematic errors on important subsets (or slices) of data. Identifying underperforming slices is particularly challenging when working with high-dimensional inputs (e.g. images, audio), where important slices are often unlabeled. 
In order to address this issue, recent studies have proposed automated slice discovery methods (SDMs), which leverage learned model representations to mine input data for slices on which a model performs poorly. To be useful to a practitioner, these methods must identify slices that are both underperforming and coherent (i.e. united by a human-understandable concept). However, no quantitative evaluation framework currently exists for rigorously assessing SDMs with respect to these criteria. Additionally, prior qualitative evaluations have shown that SDMs often identify slices that are incoherent. In this work, we address these challenges by first designing a principled evaluation framework that enables a quantitative comparison of SDMs across 1,235 slice discovery settings in three input domains (natural images, medical images, and time-series data). Then, motivated by the recent development of powerful cross-modal representation learning approaches, we present Domino, an SDM that leverages cross-modal embeddings and a novel error-aware mixture model to discover and describe coherent slices. We find that Domino accurately identifies $36\%$ of the 1,235 slices in our framework - a 12 percentage point improvement over prior methods. Further, Domino is the first SDM that can provide natural language descriptions of identified slices, correctly generating the exact name of the slice in $35\%$ of settings. + +# 1 INTRODUCTION + +Machine learning models often make systematic errors on important subsets (or slices) of data1. For instance, models trained to detect collapsed lungs in chest X-rays have been shown to make predictions based on the presence of chest drains, a device typically used during treatment (Oakden-Rayner et al., 2019). As a result, these models frequently make prediction errors on cases without chest drains, a critical data slice where false negative predictions could be life-threatening. 
Similar performance gaps across slices have been observed in radiograph classification (Badgeley et al., 2019; Zech et al., 2018; DeGrave et al., 2021), melanoma detection (Winkler et al., 2019), natural language processing (Orr et al., 2020; Goel et al., 2021), and object detection (de Vries et al., 2019), among others. If underperforming slices can be accurately identified and labeled, we can then improve model robustness by either updating the training dataset or using robust optimization techniques (Zhang et al., 2018; Sagawa et al., 2020). + +However, identifying underperforming slices is difficult in practice. When working with high-dimensional inputs (e.g. images, time-series data, video), slices are often "hidden", meaning that they cannot easily be extracted from the inputs and are not annotated in metadata (Oakden-Rayner et al., 2019). For instance, in the collapsed lung example above, the absence of chest drains is challenging to identify from raw image data and may not be explicitly labeled in metadata. In this setting, we must perform slice discovery: the task of mining unstructured input data for semantically meaningful subgroups on which the model performs poorly. + +![](images/34576c051e6288caffaf7f7cf1a8eff865a9bf8989e96eb793d0ae547118872e.jpg) +Figure 1: Proposed Approach. (Left) We design an evaluation framework to systematically compare SDMs across diverse slice settings. Here, the example slice setting includes a dataset that displays a strong correlation between the presence of birds and skies. (Right) A classifier trained to detect the presence of birds makes false positive predictions on skies without birds. We present Domino, a novel SDM that uses cross-modal embeddings to identify and describe the error slice. + +In modern machine learning workflows, practitioners commonly attempt slice discovery with a combination of feature-based interpretability methods (e.g. 
GradCAM, LIME) and manual inspection (Selvaraju et al., 2017; Ribeiro et al., 2016). However, these approaches are time-consuming and susceptible to confirmation bias (Adebayo et al., 2018). As a result, recent works have proposed automated slice discovery methods (SDMs), which use learned input representations to identify semantically meaningful slices where the model makes prediction errors (d'Eon et al., 2021; Yeh et al., 2020a; Sohoni et al., 2020; Kim et al., 2018; Singla et al., 2021). An ideal SDM should automatically identify data slices that fulfill two desiderata: (a) slices should contain examples on which the model underperforms, or has a high error rate and (b) slices should contain examples that are coherent, or align closely with a human-understandable concept. An SDM that is able to reliably satisfy these desiderata across a wide range of settings has yet to be demonstrated for two reasons: + +Issue 1: No quantitative evaluation framework exists for measuring performance of SDMs with respect to these desiderata. Existing SDM evaluations are either qualitative (d'Eon et al., 2021), performed on purely synthetic data (Yeh et al., 2020a), or consider only a small selection of tasks and slices (Sohoni et al., 2020). A comprehensive evaluation framework should be quantitative, use realistic data, cover a broad range of contexts, and evaluate both underperformance and coherence. Currently, no datasets or frameworks exist to support such an evaluation, making it difficult to evaluate the tradeoffs among prior SDMs. + +Issue 2: Prior qualitative evaluations have demonstrated that existing SDMs often identify slices that are incoherent. A practically useful SDM should discover coherent slices that are understandable by a domain expert. For example, in the chest X-ray setting described earlier, the slice "patients without chest drains" is meaningful to a physician. 
Slice coherence has previously been evaluated qualitatively by requiring users to manually inspect examples and identify common attributes (d'Eon et al., 2021; Yeh et al., 2020a). Such evaluations have shown that discovered slices often do not align with concepts understandable to a domain expert. Additionally, even if slices do align well with concepts, it may be difficult for humans to identify the shared attribute. Thus, an ideal SDM would not only output coherent slices, but also identify the concept connecting examples in each slice. + +In this work, we address both of these issues by (1) developing a framework to quantitatively evaluate the effectiveness of slice discovery methods at scale and (2) leveraging this framework to demonstrate that a powerful class of recently-developed cross-modal embeddings can be used to create an SDM that satisfies the above desiderata. Our approach – Domino – identifies coherent slices and generates automated slice descriptions. + +After formally describing the slice discovery problem in Section 2, we introduce an evaluation framework for rigorously assessing SDM performance in Section 3. We curate a set of 1,235 slice discovery settings, each consisting of a real-world dataset, a trained model, and one or more "ground truth" slices corresponding to a concept in the domain. During evaluation, the SDM is provided with the dataset and the model, and we measure if the labeled slices can be successfully identified. We find that existing methods identify "ground truth" slices in no more than $23\%$ of these settings. + +Motivated by the recent development of large cross-modal representation learning approaches (e.g. CLIP) that embed inputs and text in the same latent representation space, in Section 4 we present Domino, a novel SDM that uses cross-modal embeddings to identify coherent slices. 
Cross-modal representations incorporate semantic meaning from text into input embeddings, which we demonstrate can improve slice coherence and enable the generation of slice descriptions. Domino embeds inputs alongside natural language with cross-modal representations, identifies coherent slices with an error-aware Gaussian mixture model, and generates natural language descriptions for discovered slices. In Section 5, we use our evaluation framework to show that Domino identifies $36\%$ of the "ground truth" coherent slices across three input domains (natural images, medical images, and time-series) – a 12 percentage point improvement over existing methods. + +# 2 RELATED WORK + +Slice performance gaps. Machine learning models are often limited by the presence of underperforming slices. In Section A.2, we provide a survey of underperforming slices reported by prior studies across a range of application domains and slice types. Oakden-Rayner et al. (2019) referred to this problem as "hidden stratification" and motivated the need for slice discovery techniques. + +Slice discovery. Prior work on slice discovery has predominantly focused on input datasets with rich structure (e.g. tabular data) or metadata, where slicing can generally be performed with predicates (e.g. nationality = Peruvian) (Chung et al., 2019; Sagadeeva & Boehm, 2021; Chen et al., 2021). The slice discovery problem becomes particularly complex when input data lacks explicit structure (e.g. images, audio, etc.) and metadata, and three recent studies present methods for performing slice discovery in this unstructured setting (d'Eon et al., 2021; Sohoni et al., 2020; Kim et al., 2018; Singla et al., 2021). The proposed SDMs follow two steps: (1) embed input data in a representation space and (2) identify underperforming slices using clustering or dimensionality reduction techniques. 
These SDMs are typically evaluated by measuring performance over a limited number of slice settings or by performing qualitative assessments. The trade-offs between these SDMs have not been systematically compared, and as a result, the conditions under which these SDMs succeed at identifying coherent slices remain unclear. + +Benchmark datasets for machine learning robustness. Recently, several benchmark datasets have been proposed for evaluating the performance of models on dataset shifts. These benchmarks are valuable because they provide labels specifying important slices of data. However, these datasets do not suffice for systematic evaluations of SDMs because they either only annotate a small number of slices (Koh et al., 2021) or do not provide pretrained models that are known to underperform on the slices (He et al., 2021; Khosla et al., 2011; Hendrycks & Dietterich, 2019; Liang & Zou, 2021). + +Cross-modal embeddings. Cross-modal representation learning approaches, which embed input data and text in the same representation space, yield powerful embeddings that have contributed to large performance improvements across information retrieval and classification tasks. Cross-modal models generate semantically meaningful input representations that have been shown to be highly effective on zero-shot classification tasks (Radford et al., 2021). Cross-modal models that have inspired our work include CLIP for natural images (Radford et al., 2021), ConVIRT for medical images (Zhang et al., 2020), and WikiSatNet (Uzkent et al., 2019) for satellite imagery. + +# 3 SLICE DISCOVERY PRELIMINARIES + +In this section, we formulate the slice discovery problem. Consider a standard classification setting with input $X \in \mathcal{X}$ (e.g. an image, time-series, or graph) and label $Y \in \mathcal{Y}$ over $|\mathcal{Y}|$ classes. 
Additionally, assume there exists a set of $k$ slices that partition the data into coherent (potentially overlapping) subgroups, where each subgroup captures a distinct concept or attribute that would be familiar to a domain expert. For each input, we represent slice membership as $\mathbf{S} = \{S^{(j)}\}_{j=1}^k \in \{0,1\}^k$. As an example, in the scenario presented in Section 1, $X$ represents chest X-rays, $Y$ is a binary label indicating the presence of collapsed lungs, and $\mathbf{S} = \{S^{(1)}, S^{(2)}\}$ represents two slices: one consisting of normal X-rays with chest drains and the other consisting of collapsed lungs without chest drains.

The inputs, labels, and slices vary jointly according to a probability distribution $P(X,Y,\mathbf{S})$ over $\mathcal{X} \times \mathcal{Y} \times \{0,1\}^k$. We assume that training, validation, and test data are drawn independently and identically from this distribution. For some application-dependent value of $\epsilon$, a model $h_\theta : \mathcal{X} \to \mathcal{Y}$ exhibits degraded performance with respect to a slice $S^{(j)}$ and metric $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ if $\mathbb{E}_{X,Y|S^{(j)} = 1}[\ell(h_\theta(X), Y)] < \mathbb{E}_{X,Y|S^{(j)} = 0}[\ell(h_\theta(X), Y)] - \epsilon$.

Assuming that a trained classifier $h_\theta : \mathcal{X} \to \mathcal{Y}$ exhibits degraded performance on each of the $k$ slices in $\mathbf{S}$, we define the slice discovery problem as follows:

- Inputs: a trained classifier $h_\theta$ and a labeled dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^n$ with $n$ samples drawn from $P(X, Y)$.
- Output: a set of $\hat{k}$ slicing functions $\Psi = \{\psi^{(j)} : \mathcal{X} \times \mathcal{Y} \to \{0,1\}\}_{j=1}^{\hat{k}}$ that partition the data into $\hat{k}$ subgroups.
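The degraded-performance condition and the precision criterion used to judge an SDM's output can be sketched on per-example arrays; the function names are illustrative:

```python
import numpy as np

def is_degraded(metric_vals, slice_mask, eps):
    """Check E[l(h(X), Y) | S=1] < E[l(h(X), Y) | S=0] - eps, where
    `metric_vals` holds per-example values of a performance metric l."""
    mask = np.asarray(slice_mask, dtype=bool)
    return metric_vals[mask].mean() < metric_vals[~mask].mean() - eps

def slice_precision(true_slice, pred_slice):
    """Empirical P(S=1 | psi(X, Y) = 1) for a single slicing function."""
    pred = np.asarray(pred_slice, dtype=bool)
    return float(true_slice[pred].mean()) if pred.any() else 0.0
```

With per-example accuracy as $\ell$, `is_degraded` flags slices where the model is at least `eps` worse than on the rest of the data, and `slice_precision` measures how well a predicted slice recovers a ground-truth one.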
+ +We consider an output to be successful if, for each ground truth slice $S^{(u)}$, a slicing function $\psi^{(v)}$ predicts $S^{(u)}$ with precision above some threshold $\beta$: + +$$ +\forall u \in [k] \;\; \exists v \in [\hat{k}] : \quad P(S^{(u)} = 1 \mid \psi^{(v)}(X, Y) = 1) > \beta. +$$ + +A slice discovery method (SDM), $M(\mathcal{D}, h_{\theta}) \to \Psi$, aims to solve the slice discovery problem. + +# 4 SLICE DISCOVERY EVALUATION FRAMEWORK + +It is challenging to measure how well an SDM satisfies the following desiderata outlined in Section 1: (a) the model $h_\theta$ should exhibit degraded performance on slices predicted by the SDM and (b) slices predicted by the SDM should be coherent. Most publicly available machine learning datasets do not provide labels identifying coherent, underperforming slices. As a result, existing evaluations are either qualitative (d'Eon et al., 2021), synthetic (Yeh et al., 2020b), or small-scale (Sohoni et al., 2020). In this section, we propose an evaluation framework for estimating the success rate of an SDM: how often it successfully identifies the coherent slices on which the model underperforms. + +We propose evaluating SDMs across large sets of slice discovery settings, each consisting of: (1) a labeled dataset $\mathcal{D} = \{(x_i,y_i)\}_{i = 1}^n$, (2) a model $h_\theta$ trained on $\mathcal{D}$, and (3) ground truth slice annotations $\{\mathbf{s}_i\}_{i = 1}^n$ for one or more coherent slices $\mathbf{S}$ on which the model $h_\theta$ exhibits degraded performance. As discussed in Section 3, (1) and (2) correspond to the inputs to the SDM, while (3) corresponds to the expected output. Using these slice discovery settings, we can estimate how often an SDM $M(\mathcal{D},h_{\theta})\to \Psi$ successfully identifies the slices $\mathbf{S}$. Algorithm 1 details the procedure.
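With binary membership vectors, the success criterion above reduces to a per-slice precision check; a minimal sketch (the names are illustrative):

```python
import numpy as np

def slice_precision(s_true, s_pred):
    """Empirical estimate of P(S = 1 | psi(X, Y) = 1)."""
    s_true, s_pred = np.asarray(s_true), np.asarray(s_pred)
    if s_pred.sum() == 0:
        return 0.0
    return float(s_true[s_pred == 1].mean())

def sdm_success(true_slices, pred_slices, beta):
    """Success: every ground-truth slice is predicted by some
    slicing function with precision above beta."""
    return all(
        any(slice_precision(s, psi) > beta for psi in pred_slices)
        for s in true_slices
    )
```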
+ +Algorithm 1 SDM Evaluation Process +for $(\mathcal{D},\mathbf{s},h_{\theta})\in$ settings do +  $\Psi \leftarrow M(\mathcal{D}_{\mathrm{valid}},h_{\theta})$ $\triangleright$ Fit the SDM on the validation set, yielding a set of slicing functions $\Psi$ +  for $j\in [\hat{k}]$ do +    $\hat{\mathbf{s}}^{(j)}\gets \psi^{(j)}(\mathcal{D}_{\mathrm{test}})$ $\triangleright$ Apply the slicing functions to the test set, yielding $\hat{\mathbf{s}}^{(j)}\in \{0,1\}^{n_{\mathrm{test}}}$ +  end for +  metrics $\leftarrow \{\max_{j\in [\hat{k}]}L(\mathbf{s}^{(i)},\hat{\mathbf{s}}^{(j)})\}_{i = 1}^{k}$ $\triangleright$ Compute metric $L$ comparing each $\mathbf{s}^{(i)}$ with its best-matching $\hat{\mathbf{s}}^{(j)}$ +end for + +In Section 4.1, we propose a process for generating representative slice discovery settings, and in Section 4.2, we describe the evaluation metrics $L$ and datasets that we use. + +# 4.1 GENERATING SLICE DISCOVERY SETTINGS + +To obtain accurate estimates of SDM performance, our slice discovery settings should be both representative of real-world slice discovery settings and large in number. However, such settings are not readily available, since few datasets specify slices on which models underperform. + +Here, we propose a framework for programmatically generating a large number of realistic slice discovery settings. We begin with a base dataset $\mathcal{D}_{\mathrm{base}}$ that has either a hierarchical label structure (e.g. ImageNet) or rich metadata accompanying each example (e.g. CelebA). We select a target variable $Y$ and slice variable $\mathbf{S}$, each defined in terms of the class structure or metadata. This allows us to derive target and slice labels $\{(y_i, \mathbf{s}_i)\}_{i=1}^n$ directly from the dataset. In addition, because the slice $\mathbf{S}$ is defined in terms of meaningful annotations, we know that the slice is coherent.
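The evaluation loop in Algorithm 1 above is compact enough to sketch directly; the names below (`sdm`, `settings`) are illustrative, and the metric $L$ is instantiated as precision-at-$k$, the metric described in Section 4.2:

```python
import numpy as np

def precision_at_k(s_true, s_scores, k=10):
    """Fraction of the k highest-scoring examples that lie in the true slice."""
    top_k = np.argsort(-np.asarray(s_scores))[:k]
    return float(np.asarray(s_true)[top_k].mean())

def evaluate_sdm(sdm, settings, k=10):
    """Sketch of Algorithm 1. Each setting bundles a validation split, a
    test split, ground-truth test-set slice vectors, and a trained model;
    `sdm(valid_data, model)` returns slicing functions that score test examples.
    """
    results = []
    for valid_data, test_data, true_slices, model in settings:
        slicing_fns = sdm(valid_data, model)              # fit on validation
        preds = [psi(test_data) for psi in slicing_fns]   # apply to test
        # for each ground-truth slice, keep the best-matching predicted slice
        results.append(
            [max(precision_at_k(s, s_hat, k) for s_hat in preds)
             for s in true_slices]
        )
    return results
```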
After selecting a target variable and slice variable, we (1) generate a dataset $\mathcal{D}$ (Section 4.1.1) and (2) generate a model $h_\theta$ that exhibits degraded performance with respect to $\mathbf{S}$ (Section 4.1.2). + +# 4.1.1 DATASET GENERATION + +We categorize each slice discovery setting based on the underlying reason that the model $h_{\theta}$ exhibits degraded performance on the slices $\mathbf{S}$. We survey the literature for examples of underperforming slices in the wild, which we document in Section A.2. Based on our survey and prior work (Oakden-Rayner et al., 2019), we identify three common slice types: rare slices, correlation slices, and noisy label slices. We provide expanded descriptions in Section A.3. + +Rare slice. Models often underperform on rare data slices, which consist of data subclasses that occur infrequently in the training set (e.g. patients with a rare disease, photos taken at night). To generate settings with rare slices, we construct $\mathcal{D}$ such that for a given class label $Y$, elements in subclass $C$ occur with proportion $\alpha$, where $0.01 < \alpha < 0.1$. + +Correlation slice. If a target variable $Y$ (e.g. pneumothorax) is correlated with another variable $C$ (e.g. chest tubes), the model may learn to rely on $C$ to make predictions. To generate settings with correlation slices, we construct $\mathcal{D}$ such that a linear correlation of strength $\alpha$ exists between the target variable and other class labels, where $0.2 < \alpha < 0.8$ (details in Section A.3.2). + +Noisy label slice. Particular slices of data may exhibit higher label error rates than the rest of the training distribution. To generate settings with noisy labels, we construct $\mathcal{D}$ such that for each class label $Y$, the elements in subclass $C$ exhibit label noise with probability $\alpha$, where $0.01 < \alpha < 0.3$.
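For intuition, here is one simple way to realize a correlation of strength $\alpha$ between two binary variables (an illustrative construction, not necessarily the exact sampling scheme of Section A.3.2): with balanced $Y$ and $P(C = 1 \mid Y = y) = (1 + \alpha(2y - 1))/2$, the Pearson correlation between $Y$ and $C$ is exactly $\alpha$. Rare and noisy label slices admit similarly direct constructions (subsampling a subclass to proportion $\alpha$, or flipping its labels with probability $\alpha$).

```python
import numpy as np

def sample_correlated_labels(n, alpha, seed=0):
    """Sample binary (y, c) pairs whose Pearson correlation is alpha.

    With P(Y=1) = 0.5 and P(C=1 | Y=y) = (1 + alpha*(2y - 1)) / 2,
    Cov(Y, C) = alpha/4 and Var(Y) = Var(C) = 1/4, so corr(Y, C) = alpha.
    """
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, size=n)
    p_c = (1 + alpha * (2 * y - 1)) / 2
    c = (rng.random(n) < p_c).astype(int)
    return y, c
```

A correlation slice then consists of the examples where the correlate disagrees with the target (e.g. $Y = 1, C = 0$), which a model relying on $C$ will tend to misclassify.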
+ +# 4.1.2 MODEL GENERATION + +Each slice discovery setting includes a model $h_{\theta}$ that exhibits degraded performance with respect to a set of slices $\mathbf{S}$. We propose generating two classes of models: (a) trained models, which are realistic, and (b) synthetic models, which provide greater control over the evaluation. + +Trained Models. We train a distinct model $h_\theta$ on each of our generated datasets $\mathcal{D}$. If the model $h_\theta$ exhibits a degradation in performance with respect to the slices $\mathbf{S}$, then the model $h_\theta$, dataset $\mathcal{D}$, and slices $\mathbf{S}$ comprise a valid slice discovery setting. (See Section 3 for the formal definition of degraded performance; we use accuracy for our metric $\ell$ and set $\epsilon = 10$ percentage points.) + +Synthetic Models. Because the trained model $h_{\theta}$ may underperform on coherent slices other than $\mathbf{S}$, an SDM that fails to identify $\mathbf{S}$ may still recover other coherent, underperforming slices. This complicates the interpretation of our results. To address this subtle issue, we also create settings using synthetic models $\bar{h} : [0,1]^k \times \mathcal{Y} \to [0,1]$ that simulate predictions. This allows us to distribute errors outside of $\mathbf{S}$ randomly, ensuring that there are likely no underperforming, coherent slices outside of $\mathbf{S}$. We simulate predictions by sampling from beta distributions (see Section A.3.4). + +# 4.2 EVALUATION APPROACH + +We instantiate our evaluation framework by generating 1,235 slice discovery settings across a number of different tasks, applications, and base datasets. Detailed statistics on our settings are provided in Table 1. We generate slice discovery settings in the following three domains: + +Natural Images (CelebA and ImageNet): The CelebFaces Attributes Dataset (CelebA) includes over 200k images with 40 labeled attributes (Liu et al., 2015).
ImageNet includes 1.2 million images across 1000 labeled classes organized in a hierarchical structure (Deng et al., 2009; Fellbaum, 1998). + +Medical Images (MIMIC-CXR): The MIMIC Chest X-Ray (MIMIC-CXR) dataset includes 377,110 chest X-rays collected from the Beth Israel Deaconess Medical Center. Annotations indicate the presence or absence of fourteen conditions (Johnson et al., 2019; 2020). + +Medical Time-Series Data (EEG): In addition to our image modalities, we also explore time-series data. We obtain a dataset of short 12-second electroencephalography (EEG) signals, which have been used in prior work for predicting the onset of seizures (Saab et al., 2020). + +![](images/a17bfff4a12af98e1ae0ad1c7f41b932135252463d2ed4cae01f9a438df757b0.jpg) +Figure 2: Evaluation Framework. We propose a framework for generating slice discovery settings from any base dataset with class structure or metadata. + +We evaluate SDM performance with precision-at-$k$, which measures the proportion of the top $k$ elements in the discovered slice that are in the ground truth slice. We use $k = 10$ in this work. + +# 5 DOMINO + +In this section, we introduce Domino, an SDM that uses cross-modal embeddings to identify coherent slices and generate natural language descriptions. Domino follows a three-step procedure: + +1. **Embed (Section 5.1):** We encode the inputs $\{x_{i}\}_{i = 1}^{n}$ in a cross-modal embedding space via a function $g_{\mathrm{input}}: \mathcal{X} \to \mathbb{R}^d$. We learn this embedding function $g_{\mathrm{input}}$ jointly with an embedding function $g_{\mathrm{text}}: \mathcal{T} \to \mathbb{R}^d$ that embeds text in the same space as the inputs. +2.
**Slice (Section 5.2):** We identify underperforming regions in the cross-modal embedding space using an error-aware mixture model fit on the input embeddings $\mathbf{z}_{\mathrm{input}} := \{z_i := g_{\mathrm{input}}(x_i)\}_{i=1}^n$, model predictions $\{\hat{y}_i := h_\theta(x_i)\}_{i=1}^n$, and true class labels $\{y_i\}_{i=1}^n$. This yields $\hat{k}$ slicing functions of the form $\psi_{\mathrm{slice}}^{(j)}: \mathcal{X} \times \mathcal{Y} \to \{0,1\}$. +3. **Describe (Section 5.3):** Finally, we use the text embedding function $g_{\mathrm{text}}$ learned in step (1) to generate a set of $\hat{k}$ natural language descriptions of the discovered slices. + +# 5.1 EMBEDDING INPUTS WITH CROSS-MODAL REPRESENTATIONS + +Cross-modal representation learning algorithms embed input examples and paired text data in the same latent representation space. Formally, given a dataset of paired inputs $V \in \mathcal{X}$ and text descriptions $T \in \mathcal{T}$, $\mathcal{D}_{\mathrm{paired}} = \{(v_i,t_i)\}_{i=1}^{n_{\mathrm{paired}}}$, we learn two embedding functions $g_{\mathrm{input}}: \mathcal{X} \to \mathbb{R}^d$ and $g_{\mathrm{text}}: \mathcal{T} \to \mathbb{R}^d$ such that the distances between pairs of embeddings $dist(g_{\mathrm{input}}(v_i), g_{\mathrm{text}}(t_j))$ reflect the semantic similarity between $v_i$ and $t_j$ for all $i, j \leq n_{\mathrm{paired}}$. + +Ultimately, this joint training procedure enables the creation of semantically meaningful input embeddings that incorporate information from text. In this work, our key insight is that input representations generated from cross-modal learning techniques encode the semantic knowledge necessary for identifying coherent slices. Our method relies on the assumption that we have access to either (a) pretrained cross-modal embedding functions or (b) a dataset with paired input-text data that can be used to learn cross-modal embedding functions.
It is important to note that paired data is only required if a practitioner wishes to generate custom cross-modal embedding functions. + +Domino uses four types of cross-modal embeddings to enable slice discovery across our input domains: CLIP (Radford et al., 2021), ConVIRT (Zhang et al., 2020), MIMIC-CLIP, and EEG-CLIP. We adapt CLIP and ConVIRT from prior work, and we train MIMIC-CLIP and EEG-CLIP on large datasets with paired inputs and text (implementation details are provided in Section A.4.1). + +# 5.2 CLUSTERING EMBEDDINGS WITH ERROR-AWARE MIXTURE MODEL + +We then proceed to the second step in the Domino pipeline: slicing. Recall that our goal is to find a set of $\hat{k}$ slicing functions that partition our data into coherent and underperforming slices. Taking inspiration from the recently developed Spotlight algorithm (d'Eon et al., 2021), we propose a mixture model that jointly models the input embeddings, class labels, and model predictions. This encourages slices that are homogeneous with respect to error type (e.g. all false positives). The model assumes that data is generated according to the following generative process: each example is randomly assigned membership to a single slice according to a categorical distribution $\mathbf{S} \sim Cat(\mathbf{p}_{\mathbf{S}})$ with parameter $\mathbf{p}_{\mathbf{S}} \in \{\mathbf{p} \in \mathbb{R}_+^k : \sum_{i=1}^k p_i = 1\}$.
Given membership in slice $j$, the embeddings are normally distributed $Z|S^{(j)} = 1 \sim \mathcal{N}(\mu^{(j)}, \Sigma^{(j)})$ with mean $\mu^{(j)} \in \mathbb{R}^d$ and covariance $\Sigma^{(j)} \in \mathbb{S}_{++}^d$ (the set of symmetric positive definite $d \times d$ matrices), the labels vary as a categorical $Y|S^{(j)} = 1 \sim Cat(\mathbf{p}^{(j)})$ with parameter $\mathbf{p}^{(j)} \in \{\mathbf{p} \in \mathbb{R}_+^c : \sum_{i=1}^c p_i = 1\}$, and the model predictions also vary as a categorical $\hat{Y}|S^{(j)} = 1 \sim Cat(\hat{\mathbf{p}}^{(j)})$ with parameter $\hat{\mathbf{p}}^{(j)} \in \{\hat{\mathbf{p}} \in \mathbb{R}_+^c : \sum_{i=1}^c \hat{p}_i = 1\}$. This assumes that the embedding, label, and prediction are all independent conditioned on the slice. + +The mixture model is parameterized by $\phi = [\mathbf{p}_{\mathbf{S}}, \{\mu^{(j)}, \Sigma^{(j)}, \mathbf{p}^{(j)}, \hat{\mathbf{p}}^{(j)}\}_{j=1}^{\bar{k}}]$. The log-likelihood over the validation dataset $\mathcal{D}_{\mathrm{valid}}$ is given as follows and maximized using expectation-maximization: + +$$ +\ell(\phi) = \sum_{i=1}^{n} \log \sum_{j=1}^{\bar{k}} P(S^{(j)} = 1)\, P(Z = z_i \mid S^{(j)} = 1)\, P(Y = y_i \mid S^{(j)} = 1)^{\gamma}\, P(\hat{Y} = h_{\theta}(x_i) \mid S^{(j)} = 1)^{\gamma}, \tag{1} +$$ + +where $\gamma$ is a hyperparameter that balances the trade-off between coherence and underperformance, our two desiderata (see Section A.4.2). + +# 5.3 GENERATING NATURAL LANGUAGE DESCRIPTIONS OF DISCOVERED SLICES + +Domino can produce natural language statements that describe characteristics shared between examples in the discovered slices. To generate slice descriptions, we begin by sourcing a corpus of candidate natural language phrases $\mathcal{D}_{\mathrm{text}} = \{t_j\}_{j=1}^{n_{\mathrm{text}}}$. Critically, this text data can be sourced independently and does not need to be paired with examples in $\mathcal{D}$.
In Section A.4.3, we describe an approach for generating a corpus of phrases relevant to the domain. + +We generate an embedding for each phrase in $\mathcal{D}_{\mathrm{text}}$ using the cross-modal embedding function $g_{\mathrm{text}}$, yielding $\{z_j^{\mathrm{text}} := g_{\mathrm{text}}(t_j)\}_{j=1}^{n_{\mathrm{text}}}$. Then, we compute a prototype embedding for each of the discovered slices by taking the weighted average of the input embeddings in the slice, $\{\bar{z}_{\mathrm{slice}}^{(i)} := \psi_{\mathrm{slice}}^{(i)}(\mathbf{x}, \mathbf{y})^\top \mathbf{z}_{\mathrm{input}}\}_{i=1}^{\hat{k}}$. We also compute a prototype embedding for each class $\{\bar{z}_{\mathrm{class}}^{(c)} := \frac{1}{n_c} \sum_{i=1}^n \mathbf{1}[y_i = c] z_i^{\mathrm{input}}\}_{c=1}^C$. To distill the slice prototypes, we subtract out the prototype of the most common class in the slice, $\bar{z}_{\mathrm{slice}}^{(i)} - \bar{z}_{\mathrm{class}}^{(c)}$. Finally, to find text that describes each slice, we compute the dot product between the distilled slice prototypes and the text embeddings and return the phrase with the highest value: $\operatorname{argmax}_{j \in [n_{\mathrm{text}}]} (z_j^{\mathrm{text}})^{\top}(\bar{z}_{\mathrm{slice}}^{(i)} - \bar{z}_{\mathrm{class}}^{(c)})$. + +# 6 EXPERIMENTS + +We use the evaluation framework developed in Section 4 to systematically assess Domino, comparing it to existing SDMs across 1,235 slice discovery settings. Our experiments validate the three core design choices behind Domino: (1) the use of cross-modal embeddings, (2) the use of a novel error-aware mixture model, and (3) the generation of natural language descriptions for slices. We provide SDM implementation details in Section A.3.2 and extended evaluations in Section A.5. + +# 6.1 CROSS-MODAL EMBEDDINGS IMPROVE SDM PERFORMANCE + +In this section, we evaluate the effect of embedding type on performance.
We hold our error-aware slicing algorithm (Step 2) constant and vary our choice of embedding (Step 1). + +Natural Images. We compare four embeddings: final-layer activations of a randomly-initialized ResNet-50 (He et al., 2016), final-layer activations of $h_{\theta}$ , BiT (Kolesnikov et al., 2019), and CLIP (Radford et al., 2021). CLIP embeddings are cross-modal. Results are shown in Figure 3. + +![](images/a1918e4703a0052d8d6c223502a2cf8df8c5eed62a05a92d4c7e89daa96467df.jpg) +Figure 3: Cross-modal embeddings enable accurate slice discovery. Using our evaluation framework, we demonstrate that the use of cross-modal embeddings leads to consistent improvements in slice discovery across three datasets and two input modalities (1,235 settings). + +When evaluating with synthetic models, we find that using CLIP embeddings results in a mean precision-at-10 of 0.570 (95% CI: 0.554, 0.586), a 9 percentage point increase over BiT embeddings and a 23 percentage point increase over random activations.[2] + +When evaluating with trained models, we find no difference between using CLIP embeddings and BiT embeddings. However, both outperform activations of the trained classifier $h_{\theta}$ by nearly 15 percentage points in mean precision-at-10. This finding is of particular interest given that classifier activations are a popular embedding choice in prior SDMs (d'Eon et al., 2021; Sohoni et al., 2020). Notably, the gap between CLIP and $h_{\theta}$ activations is much smaller in settings with correlation slices. This makes sense because a model that relies on a correlate to make predictions will likely capture information about the correlate in its activations (Sohoni et al., 2020). + +Medical Images. 
We compare five embeddings: the final-layer activations of a ResNet-50 pretrained on ImageNet (He et al., 2016), the final-layer activations of the trained classifier $h_{\theta}$, BiT (Kolesnikov et al., 2019), and domain-specific cross-modal embeddings that we trained using two different methods: MIMIC-CLIP and ConVIRT. For synthetic models, cross-modal ConVIRT embeddings enable a mean precision-at-10 of 0.765 (95% CI: 0.747, 0.784), a 7-point improvement over the best unimodal embeddings (BiT), which achieve a mean precision-at-10 of 0.695 (95% CI: 0.674, 0.716). For trained models, we again find that although $h_{\theta}$ activations perform the worst on rare and noisy label slices, they are competitive with cross-modal embeddings on correlation slices. + +Medical Time Series. For our EEG dataset, we compare the final-layer activations of a pretrained seizure classifier and a CLIP-style cross-modal embedding trained on EEG-report pairs. When evaluating with synthetic models, we find that cross-modal embeddings recover coherent slices with a mean precision-at-10 of 0.697 (95% CI: 0.605, 0.784). This represents a 17-point gain over unimodal embeddings (0.532; 95% CI: 0.459, 0.608). Cross-modal embeddings also outperform unimodal embeddings when evaluated with trained models. This demonstrates that cross-modal embeddings can aid in recovering coherent slices even in input modalities other than images. + +![](images/632e575ea952ad4fe6e6b3d4d07f01d2f1de21da3d8149d58c3b7e192cbb22ba.jpg) +Figure 4: Error-aware mixture model enables accurate slice discovery. When cross-modal embeddings are provided as input, our error-aware mixture model often outperforms previously designed SDMs. Results on medical images and medical time-series data are in Section A.5.
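For reference, the E-step of the error-aware mixture model from Section 5.2 follows directly from Eq. (1); the sketch below assumes a simple `params` format (one dict of `pi`, `mu`, `var`, `p_y`, `p_yhat` per slice) and diagonal covariances for brevity, which are simplifications of this illustration rather than our implementation:

```python
import numpy as np

def e_step(z, y, y_hat, params, gamma=1.0):
    """Per-example slice responsibilities P(S^(j) = 1 | z, y, y_hat).

    Following Eq. (1), each component j contributes a Gaussian term for the
    embedding and categorical terms (raised to gamma) for the true label and
    the model prediction. Diagonal covariances are used here for brevity.
    """
    log_r = np.zeros((len(y), len(params)))
    for j, p in enumerate(params):
        gauss = -0.5 * np.sum(
            (z - p["mu"]) ** 2 / p["var"] + np.log(2 * np.pi * p["var"]), axis=1
        )
        log_r[:, j] = (
            np.log(p["pi"]) + gauss
            + gamma * np.log(p["p_y"][y])
            + gamma * np.log(p["p_yhat"][y_hat])
        )
    log_r -= log_r.max(axis=1, keepdims=True)   # numerical stability
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)
```

The M-step then re-estimates each component's parameters from the responsibility-weighted data, as in a standard Gaussian mixture.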
+ +# 6.2 ERROR-AWARE MIXTURE MODEL IMPROVES SDM PERFORMANCE + +In order to understand the effect of slicing algorithms on performance, we hold the cross-modal embeddings (Step 1) constant and vary the slicing algorithm (Step 2). We compare the error-aware mixture model to three prior SDMs, George (Sohoni et al., 2020), Multiaccuracy (Kim et al., 2018), and Spotlight (d'Eon et al., 2021), as well as a baseline we call ConfusionSDM, which outputs slicing functions that partition data into the cells of the confusion matrix. We provide cross-modal embeddings as input to all five SDMs. On noisy and rare slices in natural images, the error-aware mixture model recovers ground truth slices with a mean precision-at-10 of 0.639 (95% CI: 0.617, 0.660); this represents a $105\%$ improvement over the next-best method, George. Interestingly, on correlation slices, the naive ConfusionSDM baseline outperforms our error-aware mixture model. Extended evaluations are provided in Section A.5. + +# 6.3 DOMINO GENERATES NATURAL LANGUAGE DESCRIPTIONS OF DISCOVERED SLICES + +Domino is the first SDM that can generate natural language descriptions for identified slices. For natural images, we provide a quantitative analysis of these descriptions. Specifically, since Domino returns a ranking over all phrases in the corpus $\mathcal{D}_{\mathrm{text}}$, we can compute the percentage of settings in which the name of the "ground truth" slice (or a WordNet synonym (Fellbaum, 1998)) appears in the top-$k$ words returned by Domino. In Figure 9, we plot this percentage for $k = 1$ to $k = 10$. We find that for $34.7\%$ of rare slices, $41.0\%$ of correlation slices, and $39.0\%$ of noisy label slices, Domino ranks the name of the slice (or a synonym) first out of the thousands of phrases in our corpus. In $57.4\%$, $55.4\%$, and $48.7\%$ of rare, correlation, and noisy label slices respectively, Domino ranks the phrase in the top ten.
In Section A.1, we show that Domino is successful at recovering accurate explanations for natural image, medical image, and medical time series data. + +# 7 CONCLUSION + +In this work, we analyze the slice discovery problem. First, we observe that existing approaches for evaluating SDM performance do not allow for large-scale, quantitative evaluations. We address this challenge by introducing a programmable framework to measure SDM performance across two axes: underperformance and coherence. Second, we propose Domino, a novel SDM that combines cross-modal representations with an error-aware mixture model. Using our evaluation framework, we demonstrate that the embedding and slicing steps of Domino outperform those of existing SDMs. We also show for the first time that using cross-modal embeddings for slice discovery can enable the generation of semantically meaningful slice descriptions. Notably, Domino requires only black-box access to models and can thus be broadly useful in settings where users have API access to models. Future directions include executing controlled user studies to evaluate when generated explanations are actionable, developing strategies for computing input embeddings when access to both pre-trained cross-modal embeddings and paired input-text data is limited, and exploring strategies for improving slice discovery in settings where slice strength, $\alpha$ , is low. We hope that our evaluation framework accelerates the development of slice discovery methods and that Domino will help practitioners better evaluate their models. + +# 8 REPRODUCIBILITY STATEMENT + +We provide an open-source implementation of our evaluation framework at https://github.com/HazyResearch/domino. Users can run Domino on their own models and datasets by installing our Python package via: pip install domino. + +# 9 ETHICS STATEMENT + +Domino is a tool for identifying systematic model errors. 
No matter how effective it is at this task, there may still be failure modes that Domino will not catch. There is a legitimate concern that model debugging tools like Domino could give practitioners a false sense of security, when in fact their models are failing on important slices not recovered by Domino. It is critical that practitioners still run standard evaluations on accurately labeled, representative test sets in addition to using Domino for auditing models. Additionally, because Domino uses embeddings trained on image-text pairs sourced from the web, it may reflect societal biases when identifying and describing slices. Future work should explore the impacts of using biased embeddings to identify errors in models. What kinds of error modes might we miss? Are certain underrepresented groups or concepts less likely to be identified as an underperforming slice? + +# 10 CONTRIBUTIONS + +S.E., M.V., K.S., J.D., J.Z., and C.R. conceptualized the overall study. S.E., M.V., K.S., J.-B.D., J.D., J.Z., and C.R. contributed to the experimental design while S.E., M.V., K.S., and J.-B.D. wrote computer code and performed experiments. S.E. trained all models for CelebA and ImageNet, M.V. trained all models for MIMIC, K.S. trained all models for EEG, and J.-B.D. generated ConVIRT embeddings for MIMIC. C.L.-M. provided the labeled EEG data and clinical expertise. S.E., M.V., and K.S. prepared the manuscript. All authors contributed to manuscript review. + +# 11 ACKNOWLEDGEMENTS + +We are thankful to Karan Goel, Laurel Orr, Michael Zhang, Sarah Hooper, Neel Guha, Megan Leszczynski, Arjun Desai, Priya Mishra, Simran Arora, Jure Leskovec, Zach Izzo, Nimit Sohoni, and Weixin Liang for helpful discussions and feedback. Sabri Eyuboglu is supported by the National Science Foundation Graduate Research Fellowship. Maya Varma is supported by graduate fellowship awards from the Department of Defense (NDSEG) and the Knight-Hennessy Scholars program at Stanford University.
Khaled Saab is supported by the Stanford Interdisciplinary Graduate Fellowship with the Wu Tsai Neurosciences Institute. James Zou is supported by NSF CAREER 1942926. We gratefully acknowledge the support of NIH under No. U54EB020405 (Mobilize), NSF under Nos. CCF1763315 (Beyond Sparsity), CCF1563078 (Volume to Velocity), and 1937301 (RTML); ARL under No. W911NF-21-2-0251 (Interactive Human-AI Teaming); ONR under No. N000141712266 (Unifying Weak Supervision); ONR N00014-20-1-2480: Understanding and Applying Non-Euclidean Geometry in Machine Learning; N000142012275 (NEPTUNE); NXP, Xilinx, LETI-CEA, Intel, IBM, Microsoft, NEC, Toshiba, TSMC, ARM, Hitachi, BASF, Accenture, Ericsson, Qualcomm, Analog Devices, Google Cloud, Salesforce, Total, the HAI-GCP Cloud Credits for Research program, the Stanford Data Science Initiative (SDSI), and members of the Stanford DAWN project: Facebook, Google, and VMWare. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of NIH, ONR, or the U.S. Government. + +# REFERENCES + +Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity Checks for Saliency Maps. In S Bengio, H Wallach, H Larochelle, K Grauman, N Cesa-Bianchi, and R Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 9505-9515. Curran Associates, Inc., 2018. + +Marcus A Badgeley, John R Zech, Luke Oakden-Rayner, Benjamin S Glicksberg, Manway Liu, William Gale, Michael V McConnell, Bethany Percha, Thomas M Snyder, and Joel T Dudley. Deep learning predicts hip fracture using confounding patient and healthcare variables. npj Digital Medicine, 2(1):1-10, April 2019. +Atul Bansal, Ravinder Agarwal, and R K Sharma. 
SVM based gender classification using iris images. pp. 425-429, November 2012. +Alceu Bissoto, Michel Fornaciali, Eduardo Valle, and Sandra Avila. (de) constructing bias on skin lesion datasets. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 0-0, 2019. +Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency, pp. 77-91. PMLR, 2018. +Mayee Chen, Karan Goel, Nimit S Sohoni, Fait Poms, Kayvon Fatahalian, and Christopher Re. Mandoline: Model evaluation under distribution shift. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 1617-1629. PMLR, 18-24 Jul 2021. URL https://proceedings.mlr.press/v139/chen21i.html. +Yeounoh Chung, Tim Kraska, Neoklis Polyzotis, Ki Hyun Tae, and Steven Euijong Whang. Slice finder: Automated data slicing for model validation. In 2019 IEEE 35th International Conference on Data Engineering (ICDE), pp. 1550-1553, April 2019. +Terrance de Vries, Ishan Misra, Changhan Wang, and Laurens van der Maaten. Does object recognition work for everyone? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 52-59, 2019. +Alex J DeGrave, Jose Janizek, and Su-In Lee. AI for radiographic COVID-19 detection selects shortcuts over signal. Nature Machine Intelligence, 3(7):610-619, May 2021. +Jean-Benoit Delbrouck, Khaled Saab, Maya Varma, Sabri Eyuboglu, Jared A. Dunnmon, Pierre Chambon, Juan Manuel Zambrano, Akshay Chaudhari, and Curtis P. Langlotz. Vilmedic: a framework for research at the intersection of vision and language in medical AI. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, May 2022.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, June 2009. +Greg d'Eon, Jason d'Eon, James R. Wright, and Kevin Leyton-Brown. The spotlight: A general method for discovering systematic errors in deep learning models, 2021. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805. +Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. +Christiane Fellbaum. WordNet: An Electronic Lexical Database. Bradford Books, 1998. +Jason A Fries, Paroma Varma, Vincent S Chen, Ke Xiao, Heliodoro Tejeda, Priyanka Saha, Jared Dunnmon, Henry Chubb, Shiraz Maskatia, Madalina Fiterau, Scott Delp, Euan Ashley, Christopher Ré, and James R Priest. Weakly supervised classification of aortic valve malformations using unlabeled cardiac MRI sequences. Nat. Commun., 10(1), December 2019. + +Karan Goel, Nazneen Rajani, Jesse Vig, Samson Tan, Jason Wu, Stephan Zheng, Caiming Xiong, Mohit Bansal, and Christopher Ré. Robustness Gym: Unifying the NLP Evaluation Landscape. arXiv:2101.04840 [cs], January 2021. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. February 2015. +Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016. +Yue He, Zheyan Shen, and Peng Cui. Towards non-i.i.d.
image classification: A dataset and baselines. Pattern Recognit., 110:107383, February 2021. +Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. March 2019. +A Johnson, L Bulgarelli, T Pollard, S Horng, LA Celi, and R Mark. Mimic-iv (version 1.0), 2020. +Alistair E W Johnson, Tom J Pollard, Seth J Berkowitz, Nathaniel R Greenbaum, Matthew P Lungren, Chih-ying Deng, Roger G Mark, and Steven Horng. Mimic-cxr, a de-identified publicly available database of chest radiographs with free-text reports. Scientific data, 6(1):1-8, 2019. +Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, June 2011. +Michael P Kim, Amirata Ghorbani, and James Zou. Multiaccuracy: Black-Box Post-Processing for Fairness in Classification. arXiv:1805.12317 [cs, stat], August 2018. +Diederik P Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs], January 2017. +Allison Koenecke, Andrew Nam, Emily Lake, Joe Nudell, Minnie Quartey, Zion Mengesha, Connor Toups, John R Rickford, Dan Jurafsky, and Sharad Goel. Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences, 117(14):7684-7689, 2020. +Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton A Earnshaw, Imran S Haque, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. WILDS: A Benchmark of in-the-Wild Distribution Shifts. arXiv:2012.07421 [cs], March 2021. +Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby.
Big transfer (bit): General visual representation learning. December 2019.
+Andrey Kuehlkamp, Benedict Becker, and Kevin Bowyer. Gender-from-iris or gender-from-mascara? February 2017.
+Weixin Liang and James Zou. Metadata: A dataset of datasets for evaluating distribution shifts and training conflicts. In ICML 2021 ML4data Workshop, 2021.
+Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
+Luke Oakden-Rayner, Jared Dunnmon, Gustavo Carneiro, and Christopher Ré. Hidden stratification causes clinically meaningful failures in machine learning for medical imaging. September 2019.
+Laurel Orr, Megan Leszczyński, Simran Arora, Sen Wu, Neel Guha, Xiao Ling, and Christopher Ré. Bootleg: Chasing the tail with self-supervised named entity disambiguation. October 2020.
+Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. February 2021.
+
+Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "why should i trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pp. 1135-1144, New York, NY, USA, 2016. Association for Computing Machinery. ISBN 9781450342322. doi: 10.1145/2939672.2939778. URL https://doi.org/10.1145/2939672.2939778.
+Subhrajit Roy, Isabell Kiral-Kornek, and Stefan Harrer. Chrononet: a deep recurrent neural network for abnormal EEG identification. In Conference on Artificial Intelligence in Medicine in Europe, pp. 47-56. Springer, 2019.
+Khaled Saab, Jared Dunnmon, Christopher Ré, Daniel Rubin, and Christopher Lee-Messer. Weak supervision as an efficient approach for automated seizure detection in electroencephalography.
npj Digital Medicine, 2020. URL https://www.nature.com/articles/s41746-020-0264-0#article-info.
+Svetlana Sagadeeva and Matthias Boehm. SliceLine: Fast, linear-algebra-based slice finding for ML model debugging. In Proceedings of the 2021 International Conference on Management of Data, pp. 2290-2299. Association for Computing Machinery, New York, NY, USA, June 2021.
+Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization. arXiv:1911.08731 [cs, stat], April 2020.
+Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017.
+Ilya Semenov and Shamsul Arefin. Wikipedia word frequency. https://github.com/IlyaSemenov/wikipedia-word-frequency, 2019.
+Sahil Singla, Besmira Nushi, Shital Shah, Ece Kamar, and Eric Horvitz. Understanding failures of deep networks via robust feature extraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12853-12862, 2021.
+Akshay Smit, Saahil Jain, Pranav Rajpurkar, Anuj Pareek, Andrew Ng, and Matthew Lungren. Combining automatic labelers and expert annotations for accurate radiology report labeling using BERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1500-1519, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.117. URL https://aclanthology.org/2020.emnlp-main.117.
+Nimit Sohoni, Jared Dunnmon, Geoffrey Angus, Albert Gu, and Christopher Ré. No subclass left behind: Fine-grained robustness in coarse-grained classification problems. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H.
Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 19339-19352. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/e0688d13958a19e087e123148555e4b4-Paper.pdf.
+Juan E Tapia, Claudio A Perez, and Kevin W Bowyer. Gender classification from the same iris code used for recognition. IEEE Trans. Inf. Forensics Secur., 11(8):1760-1770, August 2016.
+Burak Uzkent, Evan Sheehan, Chenlin Meng, Zhongyi Tang, Marshall Burke, David Lobell, and Stefano Ermon. Learning to interpret satellite images using wikipedia. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019.
+Maya Varma, Laurel Orr, Sen Wu, Megan Leszczyński, Xiao Ling, and Christopher Ré. Cross-domain data integration for named entity disambiguation in biomedical text. In Findings of the Association for Computational Linguistics: EMNLP 2021, pp. 4566-4575, Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. URL https://aclanthology.org/2021.findings-emnlp.388.
+
+Julia K Winkler, Christine Fink, Ferdinand Toberer, Alexander Enk, Teresa Deinlein, Rainer Hofmann-Wellenhof, Luc Thomas, Aimilios Lallas, Andreas Blum, Wilhelm Stolz, and Holger A Haenssle. Association Between Surgical Skin Markings in Dermoscopic Images and Diagnostic Performance of a Deep Learning Convolutional Neural Network for Melanoma Recognition. JAMA Dermatol., 155(10):1135, October 2019.
+Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Tomas Pfister, and Pradeep Ravikumar. On completeness-aware concept-based explanations in deep neural networks. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 20554-20565. Curran Associates, Inc., 2020a. URL https://proceedings.neurips.cc/paper/2020/file/ecb287ff763c169694f682af52c1f309-Paper.pdf.
+Chih-Kuan Yeh, Been Kim, Sercan O Arik, Chun-Liang Li, Tomas Pfister, and Pradeep Ravikumar. On Completeness-aware Concept-Based Explanations in Deep Neural Networks. arXiv:1910.07969 [cs, stat], June 2020b.
+John R Zech, Marcus A Badgeley, Manway Liu, Anthony B Costa, Joseph J Titano, and Eric Karl Oermann. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS medicine, 15(11):e1002683, 2018.
+Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating Unwanted Biases with Adversarial Learning. Association for the Advancement of Artificial Intelligence (AAAI), January 2018.
+Yuhao Zhang, Hang Jiang, Yasuhide Miura, Christopher D. Manning, and Curtis P. Langlotz. Contrastive learning of medical visual representations from paired images and text. CoRR, abs/2010.00747, 2020. URL https://arxiv.org/abs/2010.00747.
+
+# A APPENDIX
+
+# CONTENTS
+
+A.1 Examples of Slice Descriptions
+A.2 Extended Related Work: Survey of Slices in the Wild
+A.3 Extended Description of Evaluation Framework
+
+A.3.1 Extended Description of Slice Categories
+A.3.2 SDM Implementations
+A.3.3 Extended Description of Trained Models
+A.3.4 Extended Description of Synthetic Models
+
+A.4 Extended Description of Domino
+
+A.4.1 Extended Description of Cross-Modal Embeddings
+A.4.2 Extended Description of Error-Aware Mixture Model
+A.4.3 Generating a Corpus of Natural Language Descriptions
+
+A.5 Extended Results
+
+A.5.1 Extended Analysis of Cross-Modal Embeddings
+A.5.2 Extended Analysis of Error-Aware Mixture Model
+A.5.3 Extended Analysis of Natural Language Descriptions
+A.5.4 Future Work
+
+A.6 Glossary of Notation
+
+# A.1 EXAMPLES OF SLICE DESCRIPTIONS
+
+In this section, we provide examples of the natural language slice descriptions generated by Domino.
Figure 5 includes natural language descriptions and representative photos for discovered slices in the natural image domain, Figure 6 includes natural language descriptions in the medical image domain, and Figure 7 includes natural language descriptions in the medical time-series domain.
+
+![](images/a8f4d60a733aeab7910319e9fd563ad3f12307f7a854172b0d177d6bbbed5965.jpg)
+Figure 5: Domino produces natural language descriptions of discovered slices. Natural language descriptions for discovered slices in (top row) 3 settings randomly selected from the set of the 85 rare slice, natural image settings where Domino includes the exact name of the slice in its top 5 slice descriptions; (middle row) 3 settings randomly selected from the set of the 45 correlation slice, natural image settings where Domino includes the exact name of the slice in its top 5 slice descriptions and precision-at-25 exceeds 0.8; and (bottom row) 3 settings randomly selected from the set of the 95 noisy label slice, natural image settings where Domino includes the exact name of the slice in its top 5 slice descriptions and precision-at-25 exceeds 0.8. The length of the bars beneath each description is proportional to the dot product score for the description (see Section 5.3). Also shown are the top 3-4 images that Domino associates with the discovered slice.
+
+[Figure 6 contents: top slice descriptions for synthetic and trained models across the rare, correlation, and noisy label categories, shown as report fragments mentioning cardiomegaly, pleural effusion, support devices, pneumothorax, and atelectasis.]
+Figure 6: Domino produces natural language descriptions for discovered slices in a medical image dataset. Here, we provide the top five natural language descriptions for discovered slices in (top row) two rare slice settings, (middle row) two correlation slice settings, and (bottom row) two noisy label slice settings. Colored bars represent accurate slice descriptions, and gray bars represent incorrect descriptions. Note that in our medical examples, the vocabulary consists of physician reports in the training set; since we are unable to provide the full-text reports due to patient privacy concerns, the figure includes only relevant fragments of reports. The length of the bars beneath each description is proportional to the dot product score for the description (see Section 5.3).
+
+[Figure 7 contents: top slice descriptions for the rare, correlation, and noisy label categories, shown as EEG report fragments mentioning young patient age, e.g. "...15 month old baby..." and "...2 year old boy...".]
+Figure 7: Domino produces natural language explanations for discovered slices in a medical time-series dataset. Natural language descriptions for discovered slices in 3 of our EEG slice discovery settings. In all three settings, the model is trained to detect seizures and underperforms on the slice of young patients. The three settings span all three slice categories and in each Domino describes the slice with reports mentioning young age. Note that the description corpus consists of physician reports in the training set; however, we are unable to provide the full-text reports due to patient privacy concerns, so the table includes only relevant fragments. The bar lengths beneath each description are proportional to the dot product score for the description (see Section 5.3).
+
+# A.2 EXTENDED RELATED WORK: SURVEY OF SLICES IN THE WILD
+
+Several recent studies have shown that machine learning models often make systematic errors on critical data slices. In this section, we provide a survey of underperforming slices documented in the literature.
+
+- Skin Lesion Classification (Correlation Slice): Bissoto et al. (2019) reveal that models trained to classify skin lesion images depend on clinically irrelevant information due to biases in the training data. Specifically, a computer vision model trained to classify skin lesions performs poorly on images of malignant lesions with color charts (i.e. colored bandages), since color charts more commonly appear with benign lesions.
+- Melanoma Detection (Correlation Slice): A study by Winkler et al. (2019) showed that melanoma detection models trained on dermoscopic image datasets often rely on the presence of surgical skin markings when making predictions. Since dermatologists mark suspicious lesions during clinical practice with a gentian violet surgical skin marker, dermoscopic images will often include skin markings, causing models to learn spurious correlations between markings and the presence of melanoma. Models then underperform on the slice of lesions without markings.
+- Pneumothorax Detection (Correlation Slice): Models trained to detect the presence of pneumothorax (collapsed lungs) have been found to rely on the presence of chest drains, a device used during treatment (Oakden-Rayner et al., 2019).
+- Hip-Fracture Detection (Rare Slice): Due to the low prevalence of cervical fractures in a pelvic X-ray dataset collected from the Royal Adelaide Hospital, computer vision models trained to detect fractures underperform on this slice (Oakden-Rayner et al., 2019).
+- Hip-Fracture Detection (Correlation Slice): Badgeley et al. (2019) show that the performance of models trained to detect hip fractures from X-rays is sensitive to multiple patient-specific and hospital-specific attributes. In particular, when the test distribution is subsampled to remove correlations with patient attributes (age, gender, BMI, pain, fall) and hospital attributes (scanner, department, radiation, radiologist name, order time), model performance drops to close to random.
+- Gender Classification in Images (Rare Slice): Buolamwini & Gebru (2018) demonstrated that facial analysis datasets are often composed primarily of lighter-skinned subjects (based on Fitzpatrick Skin Types) and, as a result, three commercial gender classification systems systematically underperform on rare subgroups (e.g. darker faces, female faces).
+- COVID-19 Detection in Chest X-rays (Correlation Slice): DeGrave et al.
(2021) reveal that some models trained to detect COVID-19 from radiographs do not generalize to datasets from external hospitals, indicating that the models rely on source-specific attributes instead of pathology markers.
+- Pneumonia Detection in Chest X-rays (Correlation Slice): Zech et al. (2018) evaluated pneumonia screening CNNs on three external hospitals, and found that performance on the external hospitals was significantly lower than on the original hospital dataset. Additionally, the CNNs were able to very accurately classify the hospital system and department where the radiographs were acquired, indicating that the CNN features had learned hospital-specific confounding variables.
+- Weakly Supervised Aortic Valve Malformation Classification (Noisy Label Slice): Weak supervision is commonly used in medical machine learning practice to label clinical datasets. Using a set of labeling functions, Fries et al. (2019) train a weakly-supervised model to classify aortic valve malformations. They note that noise in the labels is induced by the labeling functions, which may systematically miss coherent slices of data, meaning that a model trained on these labels may underperform on those slices.
+- Predicting Gender from Photos of the Iris (Correlation Slice): Several studies have reported models capable of predicting a person's gender from a photo of their iris (Bansal et al., 2012; Tapia et al., 2016). However, Kuehlkamp et al. (2017) show that these models may be relying on the presence of mascara to make these predictions. This suggests that performance will likely be degraded on the slice of females without mascara and males with mascara.
+
+- Speech Recognition (Rare Slice): Koenecke et al. (2020) demonstrated that automated speech recognition systems have large performance disparities between white and African American speakers.
The disparities were traced to the race gap in the corpus used to train the model, indicating that African American speakers were a rare slice. +- Object Recognition (Rare Slice): A study by de Vries et al. (2019) demonstrated that publicly available object recognition algorithms often systematically underperform on household items that commonly occur in non-Western countries and low-income communities. This is likely due to objects appearing in different environments as well as differences in object appearance. +- Named Entity Disambiguation (Rare Slice): Named entity disambiguation (NED) systems, which map textual mentions to structured entities, play a critical role in automated text parsing pipelines. Several studies have demonstrated that NED systems underperform on rare entities that occur infrequently in the training data (Orr et al., 2020; Varma et al., 2021). + +# A.3 EXTENDED DESCRIPTION OF EVALUATION FRAMEWORK + +
+| Domain | Rare Settings | Correlation Settings | Noisy Label Settings | Total Settings |
+| --- | --- | --- | --- | --- |
+| Natural Images (trained) | 177 | 520 | 287 | 984 |
+| Natural Images (synthetic) | 197 | 520 | 394 | 1,111 |
+| MIMIC (trained) | 15 | 176 | 30 | 221 |
+| MIMIC (synthetic) | 55 | 352 | 55 | 462 |
+| EEG (trained) | 10 | 10 | 10 | 30 |
+| EEG (synthetic) | 10 | 10 | 10 | 30 |
+
+Table 1: Overview of the evaluation framework. We evaluate SDMs on 1,235 (trained) slice discovery settings spanning three domains, three slice categories, and five slice parameters $(\alpha)$.
+
+# A.3.1 EXTENDED DESCRIPTION OF SLICE CATEGORIES
+
+In Section 4.1.1, we describe how we categorize each slice discovery setting based on the underlying reason that the model $h_{\theta}$ exhibits degraded performance on the slices $S$. We survey the literature for examples of underperforming slices in the wild, which we document in Section A.2. Based on our survey and prior work (Oakden-Rayner et al., 2019), we identify three popular slice types. We provide expanded descriptions below.
+
+Rare slice. Consider a slice $S \in \{0,1\}$ (e.g. patients with a rare disease, photos taken at night) that occurs infrequently in the training set (i.e. $P(S = 1) < \alpha$ for some small $\alpha$). Since a rare slice will not significantly affect the model loss during training, the model may fail to learn to classify examples within the slice. To generate settings with rare slices, we use base datasets with hierarchical label schema (e.g. ImageNet (Deng et al., 2009)). We construct dataset $\mathcal{D}$ such that for a given class label $Y$, the elements in subclass $C$ occur with proportion $\alpha$, where $\alpha$ ranges between 0.01 and 0.1.
+
+Correlation slice. If the target variable $Y$ (e.g. pneumothorax) is correlated with another variable $C$ (e.g. chest tubes), the model may learn to rely on $C$ to make predictions. This will induce a slice $S = 1[C \neq Y]$ (e.g. pneumothorax without chest tubes and normal with chest tubes) with degraded performance. To generate settings with correlation slices, we use base datasets with metadata annotations (e.g. CelebA (Liu et al., 2015)).
We sub-sample the base dataset $\mathcal{D}_{\mathrm{base}}$ such that the resulting dataset $\mathcal{D}$ exhibits a linear correlation of strength $\alpha$ between the target variable and another metadata label $C$. Here, $\alpha$ ranges between 0.2 and 0.8.
+
+We now describe our procedure for subsampling the base dataset to achieve a desired correlation. Assume we have two binary variables $Y, C \in \{0,1\}$ (Bernoulli random variables) and a dataset $\mathcal{D}_{\mathrm{base}} = \{(y_i, c_i)\}_{i=1}^{n_{\mathrm{base}}}$. Given a target correlation $\alpha$, we would like to subsample the dataset $\mathcal{D}_{\mathrm{base}}$ such that the resulting dataset $\mathcal{D} = \{(y_i, c_i)\}_{i=1}^{n}$ of size $n$ exhibits a sample correlation between $Y$ and $C$ of $\alpha$.
+
+The population correlation between $Y$ and $C$ is given by $\alpha(Y, C) = \frac{\operatorname{cov}(Y, C)}{\sigma_Y \sigma_C}$, where $\operatorname{cov}(Y, C) = \mathbb{E}[YC] - \mathbb{E}[Y]\mathbb{E}[C]$. For a sample, the unbiased estimator of the covariance is:
+
+$$
+\operatorname{cov}(\mathbf{y}, \mathbf{c}) = \frac{1}{n - 1} \sum_{i=1}^{n} (y_i - \bar{y})(c_i - \bar{c}) = \frac{1}{n - 1} \left( \sum_{i=1}^{n} y_i c_i - n \bar{y} \bar{c} \right)
+$$
+
+Since we know $Y$ and $C$ are Bernoulli random variables, we can express this in terms of variables like $n_{y=1,c=1}$ (i.e. the number of samples $i$ where $y_i = 1$ and $c_i = 1$) and $n_{y=1}$ (i.e. the number of samples $i$ where $y_i = 1$).
+
+$$
+\operatorname{cov}(\mathbf{y}, \mathbf{c}) = \frac{1}{n - 1} \left( n_{y=1,c=1} - \frac{n_{y=1} n_{c=1}}{n} \right)
+$$
+
+The correlation coefficient can then be expressed in terms of this covariance and the sample standard deviations $s_y$ and $s_c$:
+
+$$
+r = \frac{n_{y=1,c=1} - \frac{n_{y=1} n_{c=1}}{n}}{(n - 1) s_y s_c}
+$$
+
+In addition to supplying a target correlation $\alpha$, assume we also have target means $\mu_a$ and $\mu_b$ (these could be the sample means in the original dataset $\mathcal{D}_{\mathrm{base}}$, for example) and a target sample size $n$. Since $Y$ and $C$ are Bernoulli random variables, we can compute the standard deviations as $s_y = \sqrt{\mu_a (1 - \mu_a)}$ and $s_c = \sqrt{\mu_b (1 - \mu_b)}$. We can then derive simple formulas for computing the desired values needed to properly subsample the data:
+
+$$
+n_{y=1} = \mu_a n
+$$
+
+$$
+n_{c=1} = \mu_b n
+$$
+
+$$
+n_{y=1,c=1} = \alpha (n - 1) s_y s_c + \frac{n_{y=1} n_{c=1}}{n}
+$$
+
+$$
+n_{y=1,c=0} = n_{y=1} - n_{y=1,c=1}
+$$
+
+$$
+n_{y=0,c=1} = n_{c=1} - n_{y=1,c=1}
+$$
+
+$$
+n_{y=0,c=0} = n - \left( n_{y=1} + n_{c=1} - n_{y=1,c=1} \right)
+$$
+
+Noisy label slice. Errors in labels are not always distributed uniformly across the training distribution. A slice of data $S \in \{0,1\}$ may exhibit higher label error rates than the rest of the training distribution. This could be due to a number of different factors: classes may be ambiguous (e.g. annotators labeling sandwiches may disagree whether to include hot dogs), labeling heuristics may fail on important slices (e.g. medical imaging heuristics that only activate on images from one scanner type), or human annotators may lack expertise on certain subsets (e.g. annotators from one country labeling street signs in another).
A model $h_{\theta}$ trained on these labels will likely exhibit degraded performance on $S$. To generate settings with noisy label slices, we use a base dataset with metadata annotations (e.g. CelebA (Liu et al., 2015)). We construct dataset $\mathcal{D}$ such that for each class label $Y$, the elements in subclass $C$ exhibit label noise with probability $\alpha$, where $\alpha$ ranges between 0.01 and 0.3.
+
+# A.3.2 SDM IMPLEMENTATIONS
+
+Spotlight. d'Eon et al. (2021) search for an underperforming slice by maximizing the expected loss with respect to a multivariate Gaussian with spherical covariance. We use the authors' implementation provided at https://github.com/gregdeon/spotlight. We enforce a minimum spotlight size equal to $2\%$ of the validation data, as recommended in the paper. We use an initial learning rate of $1 \times 10^{-3}$ (the default in the implementation) and apply the same annealing and barriers as the authors. We optimize for 1,000 steps per spotlight (the default in the implementation).
+
+GEORGE. Sohoni et al. (2020) propose identifying underperforming slices by performing class-conditional clustering on a UMAP reduction of the embeddings. We use the implementation provided at https://github.com/HazyResearch/hidden-stratification.
+
+Multiaccuracy Boost. To identify slices where the model $h_\theta$ is systematically making mistakes, Kim et al. (2018) use ridge regression to learn a function $f: \mathbb{R}^d \to \mathbb{R}_+$ mapping from an example's embedding $z_i \in \mathbb{R}^d$ to the partial derivative of the cross-entropy loss with respect to the prediction
+
+$$
+\frac{\partial \ell \left( h_{\theta}(x_i), y_i \right)}{\partial h_{\theta}(x_i)} = \frac{1}{1 - h_{\theta}(x_i) - y_i}. \tag{2}
+$$
+
+Because this function grows as the absolute value of the residual $|h_{\theta}(x_i) - y_i|$ grows, a good $f$ should correlate with the residual.
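Equation (2) can be checked numerically: for the binary cross-entropy loss $\ell(h, y) = -y \log h - (1 - y) \log(1 - h)$ with $y \in \{0, 1\}$, a central finite difference recovers $1/(1 - h - y)$. This small sketch is our own illustration, not part of any released implementation:

```python
import math

def bce(h, y):
    # binary cross-entropy for a single example with label y in {0, 1}
    return -(y * math.log(h) + (1 - y) * math.log(1 - h))

def grad_eq2(h, y):
    # closed form from eq. (2)
    return 1.0 / (1.0 - h - y)

eps = 1e-6
for y in (0, 1):
    for h in (0.2, 0.5, 0.9):
        # central finite difference of the loss with respect to the prediction
        numeric = (bce(h + eps, y) - bce(h - eps, y)) / (2 * eps)
        assert abs(numeric - grad_eq2(h, y)) < 1e-3
```

Note the sign: for $y = 1$ the derivative is $-1/h$, which the closed form reproduces since $1/(1 - h - 1) = -1/h$.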
+
+In order to discover multiple distinct slices, the authors repeat this process $\hat{k}$ times, updating the model predictions on each iteration according to
+
+$$
+h_{\theta}^{(j+1)}(x_i) \propto e^{-\eta f(z_i)} h_{\theta}^{(j)}(x_i), \tag{3}
+$$
+
+where $\eta$ is a hyperparameter defining the step size for the update.
+
+We use an implementation of Multiaccuracy Boost based on the authors' implementation, released at https://github.com/amiratag/MultiAccuracyBoost. We use $\eta = 0.1$ as in the authors' implementation. We fit $f$ on $70\%$ of the validation data and use the remaining $30\%$ for evaluating the correlation with the residual. The authors use the same ratio in their implementation.
+
+Confusion SDM. A simple, embedding-agnostic way to identify underperforming subgroups is to inspect the cells of the confusion matrix. We include this important baseline to determine when more complicated slice discovery techniques are actually useful.
+
+# A.3.3 EXTENDED DESCRIPTION OF TRAINED MODELS
+
+In Section 4.1.2, we discuss training a distinct model $h_\theta$ for each slice discovery setting. In this section, we provide additional details on model training.
+
+For our natural image settings and medical image settings, we used a ResNet-18 randomly initialized with He initialization (He et al., 2015; 2016). We applied an Adam optimizer (Kingma & Ba, 2017) with learning rate $1 \times 10^{-4}$ for 10 epochs and applied early stopping based on the validation dataset. During training, we randomly crop each image, resize to $224 \times 224$, apply a random horizontal flip, and normalize using the ImageNet mean and standard deviation ($\mu = [0.485, 0.456, 0.406]$, $\sigma = [0.229, 0.224, 0.225]$). During inference, we resize to $256 \times 256$, apply a center crop of size $224 \times 224$, and normalize with the same mean and standard deviation as in training.
+For our medical time series settings, we use a densely connected inception convolutional neural network (Roy et al., 2019) randomly initialized with He initialization (He et al., 2015; 2016). Since the EEG signals are sampled at $200\mathrm{Hz}$ and each EEG clip is 12 seconds long, with 19 EEG electrodes, the input EEG has shape $19 \times 2400$. The models are trained with a learning rate of $10^{-6}$ and a batch size of 16 for 15 epochs.
+
+# A.3.4 EXTENDED DESCRIPTION OF SYNTHETIC MODELS
+
+In Section 4.1.2, we discuss synthesizing model predictions in order to provide greater control over the evaluation. Here, we provide additional details describing this process.
+
+Assume a binary class label $Y \in \{0,1\}$ and a slice variable $S \in \{0,1\}$. We sample the predicted probability $\hat{Y} \in [0,1]$ from one of four beta distributions, conditional on $Y$ and $S$. The parameters of those four beta distributions are set so as to satisfy a desired specificity and sensitivity in the slice (i.e. when $S = 1$) and out of the slice (i.e. when $S = 0$).
+
+For natural image settings, we set both specificity and sensitivity to 0.4 in the slice and 0.75 out of the slice. For medical image and time series settings, we set both specificity and sensitivity to 0.4 in the slice and 0.8 out of the slice.
+
+# A.4 EXTENDED DESCRIPTION OF DOMINO
+
+# A.4.1 EXTENDED DESCRIPTION OF CROSS-MODAL EMBEDDINGS
+
+Here, we provide implementation details for the four cross-modal embeddings used in this work: CLIP, ConVIRT, MIMIC-CLIP, and EEG-CLIP. Domino relies on the assumption that we have access to either (a) pretrained cross-modal embedding functions or (b) a dataset with paired input-text data that can be used to learn embedding functions.
+
+Large-scale pretrained cross-modal embedding functions can be used to generate accurate representations of input examples.
For instance, if our inference dataset consists of natural images, we can use a pre-trained CLIP model as embedding functions $g_{\mathrm{input}}$ and $g_{\mathrm{text}}$ to obtain image embeddings that lie in the same latent representation space as word embeddings.
+
+However, pre-trained cross-modal embeddings are only useful if the generated representations accurately represent the inputs in the inference dataset. For example, if the inference dataset consists of images from specialized domains (e.g. X-rays) or non-image inputs, CLIP is likely to generate poor representations.
+
+If pre-trained cross-modal embeddings are not available or cannot effectively represent the inference dataset, we require access to a separate dataset that can be used to learn the cross-modal embedding functions $g_{\mathrm{input}}$ and $g_{\mathrm{text}}$; we will refer to this dataset as the CM-training dataset for the remainder of this work. The CM-training dataset must consist of paired input-text data (e.g. image-caption pairs or radiograph-report pairs). Further, we assume that the text data provides a sufficient description of the input and includes information about potential correlates, such as object attributes or subject demographics. Note that paired input-text data is only required for the CM-training dataset; we make no such assumptions about the slice discovery dataset.
+
+We use the following four cross-modal embeddings in our analysis:
+
+- CLIP (Natural Images): CLIP embeddings are cross-modal representations generated from a large neural network trained on 400 million image-text pairs (Radford et al., 2021).
+
+- ConVIRT (Medical Images): ConVIRT embeddings are generated from pairs of chest X-rays and radiologist reports in the MIMIC-CXR dataset. We create a CM-training set with $70\%$ of the subjects in MIMIC-CXR, ensuring that no examples in our training set occur in validation or test data at slice discovery time.
We then replicate the training procedure detailed in Zhang et al. (2020), which uses contrastive learning to align embeddings. We use the implementation provided in ViLMedic (Delbrouck et al., 2022).
+
+- MIMIC (Medical Images): We generate a separate set of cross-modal embeddings for the MIMIC-CXR dataset using a variant of the CLIP training procedure (Radford et al., 2021). We use the same CM-training set as detailed above, with 89,651 image and report pairs. In order to generate text representations, we extract the findings and impressions sections from radiologist reports, which then serve as input to a BERT-based transformer initialized with weights from CheXBert and then frozen (Smit et al., 2020). Chest X-rays are passed as input to a visual transformer (ViT) pre-trained on ImageNet-21k and ImageNet 2012 at an image resolution of $224 \times 224$ (Dosovitskiy et al., 2021). All images are resized to $224 \times 224$ and normalized using mean and standard deviation values from ImageNet. Text representations are extracted from the output of the [CLS] token and image representations are extracted from the output of the final model layer. Image representations and text representations are separately passed through projection layers consisting of fully-connected layers and nonlinear activation functions, as detailed in an open-source implementation of $\mathrm{CLIP}^3$. Finally, we align text and image representations using the InfoNCE loss function with in-batch negatives as detailed in Radford et al. (2021). We train our implementation for 30 epochs with a learning rate of $10^{-4}$, a batch size of 64, and an embedding dimension of 256. The training process comes to an early stop if the loss fails to decrease for ten epochs.
+ +- EEG (Medical Time-Series Data): In order to generate cross-modal embeddings for EEG readings and associated neurologist reports, we modify the CLIP training procedure to work with time-series data (Radford et al., 2021; Saab et al., 2020). We create a CM-training set with 6,254 EEG signals and associated neurologist reports. We use the same EEG encoder described in Section A.3.3. To generate text representations, we extract the findings and narrative sections from the reports, which then serve as input to a BERT-based transformer initialized with weights from CheXBert (Smit et al., 2020). Text and EEG representations are aligned using the InfoNCE loss function with in-batch negatives as described in Radford et al. (2021). We also add a binary cross-entropy classification loss for seizure classification, weighting the InfoNCE loss by 0.9 and the cross-entropy loss by 0.1. The cross-modal model is trained with a learning rate of $10^{-6}$ , an embedding dimension of 128, and a batch size of 32 for 200 epochs. + +# A.4.2 EXTENDED DESCRIPTION OF ERROR-AWARE MIXTURE MODEL + +In Section 5.2, we describe a mixture model that jointly models the input embeddings, class labels, and model predictions. Here, we provide an expanded description of the model and additional implementation details. + +Motivation and relation to prior work. Recall from Section 3 that our goal is to find a set of $\hat{k}$ slicing functions that partition our data into subgroups. This task resembles a standard unsupervised clustering problem but differs in an important way: we are specifically interested in finding clusters where the model makes systematic prediction errors. It is not immediately obvious how this constraint can be incorporated into out-of-the-box unsupervised clustering methods, such as principal component analysis or Gaussian mixture models.
One potential approach would be to calculate the model loss on each example $x_{i}$ in $D_{v}$ , append each loss value to the corresponding embedding $g_{\mathrm{input}}(x_i)$ , and cluster with standard methods (Sohoni et al., 2020). Empirically, we find that this approach often fails to identify underperforming slices, likely because the loss element is drowned out by the other dimensions of the embedding. + +Recently, d'Eon et al. (2021) proposed Spotlight, an algorithm that searches the embedding space for contiguous regions with high loss. Our error-aware mixture model is inspired by Spotlight, but differs in several important ways: + +- Our model partitions the entire space, finding both high and low performing slices. In contrast, Spotlight searches only for regions with high loss. Spotlight introduces the "spotlight size" hyperparameter, which lower-bounds the number of examples in the slice and prevents Spotlight from identifying very small regions. +- Because we model both the class labels and predictions directly, our error-aware mixture model tends to discover slices that are homogeneous with respect to error type. On the other hand, Spotlight's objective is based on the loss, a function of the labels and predictions that makes false positives and false negatives indistinguishable (assuming cross-entropy). +- d'Eon et al. (2021) recommend using a spherical covariance matrix, $\Sigma = aI$ with $a \in \mathbb{R}$ , when using Spotlight because it substantially sped up their Adam-based optimization while still producing good results. In contrast, we use a diagonal covariance matrix of the form $\Sigma = \mathrm{diag}(a_1, a_2, \ldots, a_d)$ . Fitting our mixture model with expectation maximization remains tractable even with these more flexible parameters. + +Additional implementation details. The mixture model's objective encourages both slices with a high concentration of mistakes as well as slices with a high concentration of correct predictions.
However, in the slice discovery problem described in Section 3, the goal is only to identify slices of mistakes (i.e. slices that exhibit degraded performance with respect to some model $h_{\theta}$ ). To reconcile the model's objective with the goal of slice discovery, we model $\bar{k} > \hat{k}$ slices and then select $\hat{k}$ slices with the highest concentrations of mistakes. Specifically, we model $\bar{k} = 25$ slices and return the top $\hat{k}$ clusters with the largest absolute difference between $\hat{\mathbf{p}}$ and $\mathbf{p}$: $\sum_{i=1}^{c} |\hat{p}_i - p_i|$, where $c$ is the number of classes. + +In practice, when $d$ is large (e.g. $d > 256$ ), we first reduce the dimensionality of the embedding to $d = 128$ using principal component analysis, which speeds up the optimization procedure significantly. + +We include an important hyperparameter $\gamma \in \mathbb{R}_+$ that balances the importance of modeling the class labels and predictions against the importance of modeling the embedding. The log-likelihood over $n$ examples is given as follows and maximized using expectation-maximization: + +$$
+\ell(\phi) = \sum_{i=1}^{n} \log \sum_{j=1}^{\bar{k}} P(S^{(j)} = 1)\, P(Z = z_i \mid S^{(j)} = 1)\, P(Y = y_i \mid S^{(j)} = 1)^{\gamma}\, P(\hat{Y} = h_{\theta}(x_i) \mid S^{(j)} = 1)^{\gamma} \tag{4}
+$$ + +In practice, this hyperparameter allows users to balance between the two desiderata highlighted in Section 1. When $\gamma$ is large (i.e. $\gamma > 1$ ), the mixture model is more likely to discover underperforming slices, potentially at the expense of coherence. On the other hand, when $\gamma$ is small (i.e. $0 \leq \gamma < 1$ ), the mixture model is more likely to discover coherent slices, though they may not be underperforming. In our experiments, we set $\gamma = 10$.
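To make the role of $\gamma$ concrete, here is a minimal numpy sketch of how the objective in Eq. (4) can be evaluated, assuming diagonal-covariance Gaussians for $Z$ and categorical distributions for $Y$ and $\hat{Y}$; the function and argument names are our own illustration, not the Domino codebase.

```python
import numpy as np

def mixture_log_likelihood(z, y, y_hat, priors, mus, sigmas, p, p_hat, gamma=10.0):
    """Evaluate the gamma-weighted log-likelihood of Eq. (4).

    z: (n, d) embeddings; y, y_hat: (n,) integer labels / predictions;
    priors: (kbar,) slice priors P(S^(j) = 1); mus, sigmas: (kbar, d)
    Gaussian means and diagonal variances; p, p_hat: (kbar, c)
    categorical parameters for the labels and predictions.
    """
    # log N(z_i | mu_j, diag(sigma_j)) for every (example, slice) pair
    log_gauss = -0.5 * (
        np.sum(np.log(2 * np.pi * sigmas), axis=1)[None, :]
        + np.sum((z[:, None, :] - mus[None, :, :]) ** 2 / sigmas[None, :, :], axis=2)
    )
    # Raising the label/prediction terms to the power gamma corresponds
    # to multiplying their log-probabilities by gamma.
    log_joint = (
        np.log(priors)[None, :]
        + log_gauss
        + gamma * np.log(p[:, y].T)          # log P(Y = y_i | S^(j) = 1)
        + gamma * np.log(p_hat[:, y_hat].T)  # log P(Yhat = h(x_i) | S^(j) = 1)
    )
    # log-sum-exp over slices, then sum over examples
    m = log_joint.max(axis=1, keepdims=True)
    return float(np.sum(m[:, 0] + np.log(np.exp(log_joint - m).sum(axis=1))))
```

Increasing `gamma` upweights agreement with the labels and predictions relative to fitting the embedding, matching the trade-off between underperformance and coherence described above.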
We encourage users to tweak this parameter as they explore model errors with Domino, decreasing it when the discovered slices seem incoherent and increasing it when the discovered slices are not underperforming. + +We initialize our mixture model using a scheme based on the confusion matrix. Typically, the parameters of a Gaussian mixture model are initialized with the centroids of a k-means fit. However, in our error-aware mixture model, we model not only embeddings (as Gaussians), but also labels and predictions (as categoricals). It is unclear how best to combine these variables into a single k-means fit. We tried initializing by just applying k-means to the embeddings, but found that this led to slices that were too heterogeneous with respect to error type (e.g. a mix of false positives and true positives). Instead, we use initial clusters where almost all of the examples come from the same cell in the confusion matrix. Formally, at initialization, each slice $j$ is assigned a $y^{(j)} \in \mathcal{Y}$ and $\hat{y}^{(j)} \in \mathcal{Y}$ (i.e. each slice is assigned a cell in the confusion matrix). This is typically done in a round-robin fashion so that there are at least $\lfloor \bar{k} / |\mathcal{Y}|^2 \rfloor$ slices assigned to each cell in the confusion matrix. Then, we fill in the initial responsibility matrix $Q \in \mathbb{R}^{n \times \bar{k}}$ , where each cell $Q_{ij}$ corresponds to our model's initial estimate of $P(S^{(j)} = 1 | Y = y_i, \hat{Y} = \hat{y}_i)$ . We do this according to + +$$
+\bar{Q}_{ij} \leftarrow \left\{ \begin{array}{ll} 1 + \epsilon & y_i = y^{(j)} \wedge \hat{y}_i = \hat{y}^{(j)} \\ \epsilon & \text{otherwise} \end{array} \right. \tag{5}
+$$ + +$$
+Q_{ij} \leftarrow \frac{\bar{Q}_{ij}}{\sum_{l=1}^{\bar{k}} \bar{Q}_{il}} \tag{6}
+$$ + +where $\epsilon$ is random noise which ensures that slices assigned to the same confusion matrix cell will not have the exact same initialization.
We sample $\epsilon$ uniformly from the range $[0, E]$, where $E$ is a hyperparameter set to 0.001. + +# A.4.3 GENERATING A CORPUS OF NATURAL LANGUAGE DESCRIPTIONS + +In Section 5.3, we describe an approach for generating natural language descriptions of discovered slices. This approach uses cross-modal embeddings to retrieve descriptive phrases from a large corpus of text $\mathcal{D}_{\mathrm{text}}$ . In this section, we describe how we curate domain-specific corpora of descriptions to be used with Domino. + +First, we solicit a set of phrase templates from a domain expert. For example, since CelebA is a dataset of celebrity portraits, we use templates like: + +```txt
+a photo of a person [MASK] [MASK]
+a photo of a [MASK] woman
+...
+a [MASK] photo of a man
+``` + +In our experiments with CelebA, we use a total of thirty templates similar to these (see GitHub for the full list). In our experiments with ImageNet we use only one template: a photo of [MASK]. + +Next, we generate a large number of candidate phrases by filling in the [MASK] tokens using either (1) a pretrained masked-language model (in our experiments with CelebA we use a BERT base model (Devlin et al., 2018) and generate 100,000 phrases, keeping the $n_{\mathrm{text}} = 10,000$ with the lowest loss) or (2) a programmatic approach (in our experiments with ImageNet, we create one phrase from each of the $n_{\mathrm{text}} = 10,000$ most frequently used words on English Wikipedia (Semenov & Arefin, 2019)). + +For our medical image and time-series datasets, we use corpora of physician reports sourced from MIMIC-CXR (Johnson et al., 2019) and Saab et al. (2020) with $n_{\mathrm{text}} = 159,830$ and $n_{\mathrm{text}} = 41,258$ , respectively. + +# A.5 EXTENDED RESULTS + +# A.5.1 EXTENDED ANALYSIS OF CROSS-MODAL EMBEDDINGS + +In Section 6.1, we explore the effect of embedding type on slice discovery performance. Here, we provide an extended evaluation of our results.
+ +We note that slice discovery performance is generally lower across rare slices when compared to correlation and noisy label slices. This trend is visible for both model types (synthetic and trained) and all embedding types (unimodal and cross-modal). This is likely due to the nature of the rare slice setting; since a rare subclass occurs in the dataset with very low frequency, it is difficult for SDMs to identify the error slice. + +Slice discovery performance on synthetic models is often higher than performance on trained models. This is expected because the use of synthetic models allows us to explicitly control performance on our labeled ground-truth slices, and as a result, Domino can more effectively recover the slice. On the other hand, trained models are likely to include underperforming, coherent slices that are not labeled as "ground-truth", which may limit the ability of Domino to recover the labeled slice. Trained models are also likely to exhibit lower slice performance degradations than synthetic models. We discuss these trade-offs in Section 4.1.2. + +# A.5.2 EXTENDED ANALYSIS OF ERROR-AWARE MIXTURE MODEL + +In Section 6.2, we explore how the choice of slicing algorithm affects slice discovery performance. Here, we provide an extended evaluation of our results. + +Additional experimental results with our error-aware mixture model are shown in Figure 8. + +![](images/c04399fea2395d5cd63d39be342ef9a7f8990c740c607d906d88a38471182a3f.jpg) +Figure 8: Error-aware mixture model enables accurate slice discovery. We show that when cross-modal embeddings are provided as input, our error-aware mixture model often outperforms previously-designed SDMs. + +We note that the naive Confusion SDM demonstrates high performance on correlation slices across all three datasets, even outperforming our error-aware mixture model in some cases. 
This finding suggests that when a strong correlation exists, simply inspecting the confusion matrix may be sufficient for effective slice discovery. + +Our error-aware mixture model demonstrates significantly higher performance on rare slices than prior SDMs; this is especially visible in trained model results. This is likely because our error-aware mixture model jointly models the input embeddings, class labels, and model predictions, allowing for better identification of rare slices when compared to existing methods. + +# A.5.3 EXTENDED ANALYSIS OF NATURAL LANGUAGE DESCRIPTIONS + +Please refer to Figure 9 for a quantitative evaluation of natural image descriptions. + +![](images/7a059657a150539d9e9670da7221590ce9b97326e01ddcac214c71c665ee4c7f.jpg) +Figure 9: Descriptions of discovered slices align with the names of the ground truth slices. Here, we show the fraction of natural image settings where Domino includes the exact name of the ground truth slice (or one of its WordNet synonyms (Fellbaum, 1998)) in the top- $k$ slice descriptions. + +# A.5.4 FUTURE WORK + +Based on our analysis of model trends, we identify several directions for future work. First, we observe that slice discovery is particularly difficult when the strength of the slice $(\alpha)$ is low. In the future, we aim to explore strategies for improving slice discovery and explanation generation in this scenario. Additionally, we hope to explore strategies for generating informative input embeddings when access to paired input-text data is limited. Finally, we intend to run controlled user studies in order to understand when explanations generated by Domino are actionable for practitioners and domain experts. + +# A.6 GLOSSARY OF NOTATION + +# Classification (Section 3) + +$\mathcal{X}$ The set of values that the inputs can take on in a standard classification setting. For example, this could be the set of all possible $256 \times 256$ RGB images. 
+
$\mathcal{Y}$ The set of values that the labels can take on in a standard classification setting. In this work, we deal primarily with binary classification where $\mathcal{Y} = \{0,1\}$.
+$X$ A random variable representing the input in a standard classification setting.
+$Y$ A random variable representing the label in a standard classification setting.
+$P$ A probability distribution. For example, the joint distribution over inputs and labels can be expressed as $P(X,Y)$.
+$x_{i}$ The realization of the $i^{\mathrm{th}}$ sample of $X$.
+$y_{i}$ The realization of the $i^{\mathrm{th}}$ sample of $Y$.
+$n$ The number of samples in a classification dataset.
+$\mathbf{x}$ The set of $n$ samples of $X$ , such that $\mathbf{x} = \{x_{i}\}_{i = 1}^{n}\in \mathcal{X}^{n}$.
+$\mathbf{y}$ The set of $n$ samples of $Y$ , such that $\mathbf{y} = \{y_i\}_{i=1}^n \in \mathcal{Y}^n$.
+$\mathcal{D}$ A labeled dataset sampled from $P(X,Y)$ , such that $\mathcal{D} = \{(x_i,y_i)\}_{i = 1}^n$.
+$h_\theta$ A classifier with parameters $\theta$ that predicts $Y$ from $X$ , where $h_\theta : \mathcal{X} \to \mathcal{Y}$.
+$\hat{Y}$ A random variable representing the prediction of the model, such that $\hat{Y} = h_{\theta}(X)$.
+$\ell$ A performance metric for a standard classification setting, where $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ (e.g. accuracy). + +# Slice Discovery (Section 3) + +$S^{(j)}$ A random variable representing the $j^{\mathrm{th}}$ data slice. This is a binary variable $S^{(j)}\in \{0,1\}$.
+$s_i^{(j)}$ The realization of the $i^{\mathrm{th}}$ sample of $S^{(j)}$.
+$k$ The number of slices in the data.
+$\mathbf{S}$ A random variable representing a set of $k$ slices $\mathbf{S} = \{S^{(j)}\}_{j = 1}^{k}\in \{0,1\}^{k}$.
+$\mathbf{s}_i$ The realization of the $i^{\mathrm{th}}$ sample of $\mathbf{S}$.
+$\psi^{(j)}$ A slicing function $\psi^{(j)}:\mathcal{X}\times \mathcal{Y}\to \{0,1\}$.
+$\hat{k}$ The number of slicing functions returned by a slice discovery method.
+
$\Psi$ A set of $\hat{k}$ slicing functions $\Psi = \{\psi^{(j)}\}_{j=1}^{\hat{k}}$.
+$M$ A slice discovery method, $M(\mathcal{D},h_{\theta})\to \Psi$. + +# Evaluation Framework (Section 4) + +$L$ A slice discovery metric, $L:\{0,1\}^{n}\times [0,1]^{n}\to \mathbb{R}$ (e.g. precision-at-10).
+$C$ A random variable representing a metadata attribute. We use $C$ to generate datasets with underperforming slices.
+$\alpha$ The strength of a generated slice. For example, in a correlation slice, $\alpha$ is the Pearson correlation coefficient between the label $Y$ and the correlate $C$.
+$\epsilon$ The performance degradation in metric $\ell$ required for a model to be considered underperforming.
+$\bar{h}_{\theta}$ A synthetic model, $\bar{h}:\{0,1\}^k\times \mathcal{Y}\to [0,1]$ , which samples predictions $\hat{Y}$ conditional on $Y$ and $\mathbf{S}$. + +# Domino (Section 5) + +$\mathcal{T}$ The set of possible text strings.
+$T$ A random variable representing a text string in a paired data setting, $T\in \mathcal{T}$.
+$V$ A random variable representing inputs in a paired data setting, $V \in \mathcal{X}$ . Note that we do not use $X$ here in order to emphasize the difference between the data used to train the classifier and the data used to learn cross-modal embeddings. + +$n_{\mathrm{paired}}$ The number of examples in a paired dataset. + +$\mathcal{D}_{\text{paired}}$ A paired dataset $\mathcal{D}_{\mathrm{paired}} = \{(v_i, t_i)\}_{i=1}^{n_{\text{paired}}}$ , where the text $t_i$ describes the input $v_i$. + +$d$ The dimensionality of the embeddings. + +$g_{\mathrm{input}}$ An embedding function for inputs, $g_{\mathrm{input}}:\mathcal{X}\to \mathbb{R}^d$. + +$g_{\mathrm{text}}$ An embedding function for text, $g_{\mathrm{text}}:\mathcal{T}\to \mathbb{R}^d$. + +$Z$ A random variable representing the embedding of an input, such that $Z = g_{\mathrm{input}}(X)$.
+$z_{i}$ The realization of the $i^{\mathrm{th}}$ sample of $Z$ , such that $z_{i} = g_{\mathrm{input}}(x_{i})$.
+
$\mathbf{z}$ The set of $n$ samples of $Z$ , such that $\mathbf{z} = \{z_i\}_{i=1}^n \in \mathbb{R}^{n \times d}$. + +$Z_{\mathrm{text}}$ A random variable representing the embedding of a text string, such that $Z_{\mathrm{text}} = g_{\mathrm{text}}(T)$.
+$z_{i}^{\mathrm{text}}$ The realization of the $i^{\mathrm{th}}$ sample of $Z_{\mathrm{text}}$ , such that $z_{i}^{\mathrm{text}} = g_{\mathrm{text}}(t_{i})$.
+$\bar{z}_{\mathrm{slice}}^{(i)}$ The average embedding of the $i^{\mathrm{th}}$ slice.
+$\bar{z}_{\mathrm{class}}^{(i)}$ The average embedding of the $i^{\mathrm{th}}$ class.
+$\mu^{(i)}$ In the error-aware mixture model, the mean parameter of the Gaussian distribution used to model $Z$ for the $i^{\mathrm{th}}$ slice.
+$\pmb{\Sigma}^{(i)}$ In the error-aware mixture model, the covariance parameter of the Gaussian distribution used to model $Z$ for the $i^{\mathrm{th}}$ slice.
+$\mathbf{p}^{(i)}$ In the error-aware mixture model, the parameter of the categorical distribution used to model $Y$ for the $i^{\mathrm{th}}$ slice.
+$\hat{\mathbf{p}}^{(i)}$ In the error-aware mixture model, the parameter of the categorical distribution used to model $\hat{Y}$ for the $i^{\mathrm{th}}$ slice.
+$\phi$ The parameters of the error aware mixture model: $\phi = [\mathbf{p}_S, \{\mu^{(s)}, \Sigma^{(s)}, \mathbf{p}^{(s)}, \hat{\mathbf{p}}^{(s)}\}_{s=1}^{\bar{k}}]$ \ No newline at end of file diff --git a/dominodiscoveringsystematicerrorswithcrossmodalembeddings/images.zip b/dominodiscoveringsystematicerrorswithcrossmodalembeddings/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..d1126c892ded444ea797a6cb451561ffc57a95b9 --- /dev/null +++ b/dominodiscoveringsystematicerrorswithcrossmodalembeddings/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07d329599a4542465eeba47ad9619917fc0680a43423c57ed6ead4206b2f8b05 +size 669580 diff --git a/dominodiscoveringsystematicerrorswithcrossmodalembeddings/layout.json b/dominodiscoveringsystematicerrorswithcrossmodalembeddings/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c6515647afaf791d7e2e2dc284712518179956d8 --- /dev/null +++ b/dominodiscoveringsystematicerrorswithcrossmodalembeddings/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c18e82e519fbd4a40ff735d0702c2eb995f219070faf45b35e19ba528f2c6e14 +size 1089220 diff --git a/efficientlymodelinglongsequenceswithstructuredstatespaces/1b09a9ab-df28-40ac-abca-c0b1e6c3406b_content_list.json b/efficientlymodelinglongsequenceswithstructuredstatespaces/1b09a9ab-df28-40ac-abca-c0b1e6c3406b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..048f187600ffb8128bc8b488f6141ed19723e2ce --- /dev/null +++ b/efficientlymodelinglongsequenceswithstructuredstatespaces/1b09a9ab-df28-40ac-abca-c0b1e6c3406b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ec3c74520ee73b722f7deb9b8d37e5174fd570ff53e8d577a006cba0b49aae66 +size 197444 diff --git a/efficientlymodelinglongsequenceswithstructuredstatespaces/1b09a9ab-df28-40ac-abca-c0b1e6c3406b_model.json 
b/efficientlymodelinglongsequenceswithstructuredstatespaces/1b09a9ab-df28-40ac-abca-c0b1e6c3406b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..f705cc82ff4ba8385677fbf4b9fc103ac53f4321 --- /dev/null +++ b/efficientlymodelinglongsequenceswithstructuredstatespaces/1b09a9ab-df28-40ac-abca-c0b1e6c3406b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b39dfc2326facb540546b0c9f87249f175adca14627e263ce8065ff6741fdc46 +size 227004 diff --git a/efficientlymodelinglongsequenceswithstructuredstatespaces/1b09a9ab-df28-40ac-abca-c0b1e6c3406b_origin.pdf b/efficientlymodelinglongsequenceswithstructuredstatespaces/1b09a9ab-df28-40ac-abca-c0b1e6c3406b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..36e9961f9b7a47caaae331873e7044a28620b0d9 --- /dev/null +++ b/efficientlymodelinglongsequenceswithstructuredstatespaces/1b09a9ab-df28-40ac-abca-c0b1e6c3406b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:80d392f813adcde6458dfbdcb82a56d3807f872dda0a66913b98bec1d41d58c9 +size 1772357 diff --git a/efficientlymodelinglongsequenceswithstructuredstatespaces/full.md b/efficientlymodelinglongsequenceswithstructuredstatespaces/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e0dcd323571af5ea317b167e6672fb859bc1ddfd --- /dev/null +++ b/efficientlymodelinglongsequenceswithstructuredstatespaces/full.md @@ -0,0 +1,877 @@ +# EFFICIENTLY MODELING LONG SEQUENCES WITH STRUCTURED STATE SPACES + +Albert Gu & Karan Goel & Christopher Ré + +Department of Computer Science, Stanford University + +{albertgu, krng}@stanford.edu, chrismre@cs.stanford.edu + +# ABSTRACT + +A central goal of sequence modeling is designing a single principled model that can address sequence data across a range of modalities and tasks, particularly on long-range dependencies. 
Although conventional models including RNNs, CNNs, and Transformers have specialized variants for capturing long dependencies, they still struggle to scale to very long sequences of 10,000 or more steps. A promising recent approach proposed modeling sequences by simulating the fundamental state space model (SSM) $x'(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t)$, and showed that for appropriate choices of the state matrix $A$, this system could handle long-range dependencies mathematically and empirically. However, this method has prohibitive computation and memory requirements, rendering it infeasible as a general sequence modeling solution. We propose the Structured State Space (S4) sequence model based on a new parameterization for the SSM, and show that it can be computed much more efficiently than prior approaches while preserving their theoretical strengths. Our technique involves conditioning $A$ with a low-rank correction, allowing it to be diagonalized stably and reducing the SSM to the well-studied computation of a Cauchy kernel. S4 achieves strong empirical results across a diverse range of established benchmarks, including (i) $91\%$ accuracy on sequential CIFAR-10 with no data augmentation or auxiliary losses, on par with a larger 2-D ResNet, (ii) substantially closing the gap to Transformers on image and language modeling tasks, while performing generation $60\times$ faster, and (iii) SoTA on every task from the Long Range Arena benchmark, including solving the challenging Path-X task of length 16k that all prior work fails on, while being as efficient as all competitors. $^{1}$ + +# 1 INTRODUCTION + +A central problem in sequence modeling is efficiently handling data that contains long-range dependencies (LRDs). Real-world time-series data often requires reasoning over tens of thousands of time steps, while few sequence models address even thousands of time steps.
For instance, results from the long-range arena (LRA) benchmark (Tay et al., 2021) highlight that sequence models today perform poorly on LRD tasks, including one (Path-X) where no model performs better than random guessing. + +Since LRDs are perhaps the foremost challenge for sequence models, all standard model families such as continuous-time models (CTMs), RNNs, CNNs, and Transformers include many specialized variants designed to address them. Modern examples include orthogonal and Lipschitz RNNs (Arjovsky et al., 2016; Erichson et al., 2021) to combat vanishing gradients, dilated convolutions to increase context size (Bai et al., 2018; Oord et al., 2016), and an increasingly vast family of efficient Transformers that reduce the quadratic dependence on sequence length (Katharopoulos et al., 2020; Choromanski et al., 2020). Despite being designed for LRDs, these solutions still perform poorly on challenging benchmarks such as LRA (Tay et al., 2021) or raw audio classification (Gu et al., 2021). + +An alternative approach to LRDs was recently introduced based on the state space model (SSM) (Fig. 1). SSMs are a foundational scientific model used in fields such as control theory, computational neuroscience, and many more, but have not been applicable to deep learning for concrete theoretical reasons. In particular, Gu et al. (2021) showed that deep SSMs actually struggle even on simple tasks, but can perform exceptionally well when equipped with special state matrices $\mathbf{A}$ recently derived + +![](images/ba6a99bdffe9d880e6bef9d8c655986d535ea9906197e693fdc76b1594f43bae.jpg) +Figure 1: (Left) State Space Models (SSM) parameterized by matrices $A, B, C, D$ map an input signal $u(t)$ to output $y(t)$ through a latent state $x(t)$ . (Center) Recent theory on continuous-time memorization derives special $A$ matrices that allow SSMs to capture LRDs mathematically and empirically. (Right) SSMs can be computed either as a recurrence (left) or convolution (right). 
However, materializing these conceptual views requires utilizing different representations of its parameters (red, blue, green) which are very expensive to compute. S4 introduces a novel parameterization that efficiently swaps between these representations, allowing it to handle a wide range of tasks, be efficient at both training and inference, and excel at long sequences. + +![](images/68c7bdc39b37ae1aff338fb493c68ee26ea20a6c6c252d533d175c67dce40f33.jpg) + +![](images/43782a7ebcdf4be0148dd04c5ff89458daaeaa560a0339bf29c50db61e2dca30.jpg) + +to solve a problem of continuous-time memorization (Voelker et al., 2019; Gu et al., 2020a). Their Linear State Space Layer (LSSL) conceptually unifies the strengths of CTM, RNN and CNN models, and provides a proof of concept that deep SSMs can address LRDs in principle. + +Unfortunately, the LSSL is infeasible to use in practice because of prohibitive computation and memory requirements induced by the state representation. For state dimension $N$ and sequence length $L$ , computing the latent state requires $O(N^2 L)$ operations and $O(N L)$ space - compared to a $\Omega(L + N)$ lower bound for both. Thus for reasonably sized models (e.g. $N = 256$ in Gu et al. (2021)), the LSSL uses orders of magnitude more memory than comparably-sized RNNs or CNNs. Although theoretically efficient algorithms for the LSSL were proposed, we show that these are numerically unstable. In particular, the special $\mathbf{A}$ matrix is highly non-normal in the linear algebraic sense, which prevents the application of conventional algorithmic techniques. Consequently, although the LSSL showed that SSMs have strong performance, they are currently computationally impractical as a general sequence modeling solution. + +In this work, we introduce the Structured State Space (S4) sequence model based on the SSM that solves the critical computational bottleneck in previous work. 
Technically, S4 reparameterizes the structured state matrices $\mathbf{A}$ appearing in Voelker et al. (2019); Gu et al. (2020a) by decomposing them as the sum of a low-rank and skew-symmetric term. Additionally, instead of expanding the standard SSM in coefficient space, we compute its truncated generating function in frequency space, which can be simplified into a multipole-like evaluation. Combining these two ideas, we show that the low-rank term can be corrected by the Woodbury identity while the skew-symmetric term can be diagonalized stably, ultimately reducing to a well-studied and theoretically stable Cauchy kernel (Pan, 2001; 2017). This results in $\tilde{O}(N + L)$ computation and $O(N + L)$ memory usage, which is essentially tight for sequence models. Compared to the LSSL, S4 is up to $30\times$ faster with $400\times$ less memory usage, while exceeding the LSSL's performance empirically. + +Empirically, S4 significantly advances the state-of-the-art for LRD. On the LRA benchmark for efficient sequence models, S4 is as fast as all baselines while outperforming them by $20+$ points on average. S4 is the first model to solve the difficult LRA Path-X task (length-16384), achieving $88\%$ accuracy compared to $50\%$ random guessing for all prior work. On speech classification with length-16000 sequences, S4 halves the test error $(1.7\%)$ of specialized Speech CNNs – by contrast, all RNN and Transformer baselines fail to learn $(\geq 70\%$ error). + +Towards a general-purpose sequence model. Beyond LRD, a broad goal of machine learning is to develop a single model that can be used across a wide range of problems. Models today are typically specialized to solve problems from a particular domain (e.g. images, audio, text, time-series), and enable a narrow range of capabilities (e.g. efficient training, fast generation, handling irregularly sampled data). 
This specialization is typically expressed via domain-specific preprocessing, inductive biases, and architectures. Sequence models provide a general framework for solving many of these problems with reduced specialization – e.g. Vision Transformers for image classification with less 2D information (Dosovitskiy et al., 2020). However, most models such as Transformers generally still require substantial specialization per task to achieve high performance. + +Deep SSMs in particular have conceptual strengths that suggest they may be promising as a general sequence modeling solution. These strengths include a principled approach to handling LRDs, as well as the ability to move between continuous-time, convolutional, and recurrent model representations, each with distinct capabilities (Fig. 1). Our technical contributions enable SSMs to be applied successfully to a varied set of benchmarks with minimal modification: + +- Large-scale generative modeling. On CIFAR-10 density estimation, S4 is competitive with the best autoregressive models (2.85 bits per dim). On WikiText-103 language modeling, S4 substantially closes the gap to Transformers (within 0.8 perplexity), setting SoTA for attention-free models. +- Fast autoregressive generation. Like RNNs, S4 can use its latent state to perform $60 \times$ faster pixel-token generation than standard autoregressive models on CIFAR-10 and WikiText-103. +- Sampling resolution change. Like specialized CTMs, S4 can adapt to changes in time-series sampling frequency without retraining, e.g. at $0.5 \times$ frequency on speech classification. +- Learning with weaker inductive biases. With no architectural changes, S4 surpasses Speech CNNs on speech classification, outperforms the specialized Informer model on time-series forecasting problems, and matches a 2-D ResNet on sequential CIFAR with over $90\%$ accuracy. + +# 2 BACKGROUND: STATE SPACES + +Sections 2.1 to 2.4 describe the four properties of SSMs in Fig.
1: the classic continuous-time representation, addressing LRDs with the HiPPO framework, the discrete-time recurrent representation, and the parallelizable convolution representation. In particular, Section 2.4 introduces the SSM convolution kernel $\overline{K}$ , which is the focus of our theoretical contributions in Section 3. + +# 2.1 STATE SPACE MODELS: A CONTINUOUS-TIME LATENT STATE MODEL + +The state space model is defined by the simple equation (1). It maps a 1-D input signal $u(t)$ to an $N$ -D latent state $x(t)$ before projecting to a 1-D output signal $y(t)$ . + +$$ +\begin{array}{l} x ^ {\prime} (t) = \boldsymbol {A} x (t) + \boldsymbol {B} u (t) \\ y (t) = C x (t) + D u (t) \\ \end{array} +$$ + +SSMs are broadly used in many scientific disciplines and related to latent state models such as Hidden Markov Models (HMM). Our goal is to simply use the SSM as a black-box representation in a deep sequence model, where $A, B, C, D$ are parameters learned by gradient descent. For the remainder of this paper, we will omit the parameter $D$ for exposition (or equivalently, assume $D = 0$ ) because the term $Du$ can be viewed as a skip connection and is easy to compute. + +# 2.2 ADDRESSING LONG-RANGE DEPENDENCIES WITH HIPPO + +Prior work found that the basic SSM (1) actually performs very poorly in practice. Intuitively, one explanation is that linear first-order ODEs solve to an exponential function, and thus may suffer from gradients scaling exponentially in the sequence length (i.e., the vanishing/exploding gradients problem (Pascanu et al., 2013)). To address this problem, the LSSL leveraged the HiPPO theory of continuous-time memorization (Gu et al., 2020a). HiPPO specifies a class of certain matrices $\mathbf{A} \in \mathbb{R}^{N \times N}$ that when incorporated into (1), allows the state $x(t)$ to memorize the history of the input $u(t)$ . The most important matrix in this class is defined by equation (2), which we will call the HiPPO matrix. 
For example, the LSSL found that simply modifying an SSM from a random matrix $\mathbf{A}$ to equation (2) improved its performance on the sequential MNIST benchmark from $60\%$ to $98\%$.

$$
(\text{HiPPO Matrix}) \qquad A_{nk} = -\begin{cases} (2n+1)^{1/2}(2k+1)^{1/2} & \text{if } n > k \\ n+1 & \text{if } n = k \\ 0 & \text{if } n < k. \end{cases} \tag{2}
$$

# 2.3 DISCRETE-TIME SSM: THE RECURRENT REPRESENTATION

To be applied on a discrete input sequence $(u_0, u_1, \ldots)$ instead of continuous function $u(t)$, (1) must be discretized by a step size $\Delta$ that represents the resolution of the input. Conceptually, the inputs $u_k$ can be viewed as sampling an implicit underlying continuous signal $u(t)$, where $u_k = u(k\Delta)$.

To discretize the continuous-time SSM, we follow prior work in using the bilinear method (Tustin, 1947), which converts the state matrix $\mathbf{A}$ into an approximation $\overline{\mathbf{A}}$. The discrete SSM is

$$
x_k = \overline{\boldsymbol{A}} x_{k-1} + \overline{\boldsymbol{B}} u_k \qquad \overline{\boldsymbol{A}} = (\boldsymbol{I} - \Delta/2 \cdot \boldsymbol{A})^{-1}(\boldsymbol{I} + \Delta/2 \cdot \boldsymbol{A}) \tag{3}
$$

$$
y_k = \overline{\boldsymbol{C}} x_k \qquad \overline{\boldsymbol{B}} = (\boldsymbol{I} - \Delta/2 \cdot \boldsymbol{A})^{-1} \Delta \boldsymbol{B} \qquad \overline{\boldsymbol{C}} = \boldsymbol{C}.
$$

Equation (3) is now a sequence-to-sequence map $u_k \mapsto y_k$ instead of function-to-function. Moreover, the state equation is now a recurrence in $x_k$, allowing the discrete SSM to be computed like an RNN. Concretely, $x_k \in \mathbb{R}^N$ can be viewed as a hidden state with transition matrix $\overline{A}$.

Notationally, throughout this paper we use $\overline{A}, \overline{B}, \ldots$ to denote discretized SSM matrices defined by (3).
Note that these matrices are a function of both $A$ as well as a step size $\Delta$; we suppress this dependence for notational convenience when it is clear.

# 2.4 TRAINING SSMs: THE CONVOLUTIONAL REPRESENTATION

The recurrent SSM (3) is not practical for training on modern hardware due to its sequentiality. Instead, there is a well-known connection between linear time-invariant (LTI) SSMs such as (1) and continuous convolutions. Correspondingly, (3) can actually be written as a discrete convolution.

For simplicity let the initial state be $x_{-1} = 0$. Then unrolling (3) explicitly yields

$$
x_0 = \overline{\boldsymbol{B}} u_0 \qquad x_1 = \overline{\boldsymbol{A}}\,\overline{\boldsymbol{B}} u_0 + \overline{\boldsymbol{B}} u_1 \qquad x_2 = \overline{\boldsymbol{A}}^2 \overline{\boldsymbol{B}} u_0 + \overline{\boldsymbol{A}}\,\overline{\boldsymbol{B}} u_1 + \overline{\boldsymbol{B}} u_2 \qquad \ldots
$$

$$
y_0 = \overline{\boldsymbol{C}}\,\overline{\boldsymbol{B}} u_0 \qquad y_1 = \overline{\boldsymbol{C}}\,\overline{\boldsymbol{A}}\,\overline{\boldsymbol{B}} u_0 + \overline{\boldsymbol{C}}\,\overline{\boldsymbol{B}} u_1 \qquad y_2 = \overline{\boldsymbol{C}}\,\overline{\boldsymbol{A}}^2 \overline{\boldsymbol{B}} u_0 + \overline{\boldsymbol{C}}\,\overline{\boldsymbol{A}}\,\overline{\boldsymbol{B}} u_1 + \overline{\boldsymbol{C}}\,\overline{\boldsymbol{B}} u_2 \qquad \ldots
$$

This can be vectorized into a convolution (4) with an explicit formula for the convolution kernel (5).

$$
y_k = \overline{\boldsymbol{C}}\,\overline{\boldsymbol{A}}^k \overline{\boldsymbol{B}} u_0 + \overline{\boldsymbol{C}}\,\overline{\boldsymbol{A}}^{k-1} \overline{\boldsymbol{B}} u_1 + \cdots + \overline{\boldsymbol{C}}\,\overline{\boldsymbol{A}}\,\overline{\boldsymbol{B}} u_{k-1} + \overline{\boldsymbol{C}}\,\overline{\boldsymbol{B}} u_k \tag{4}
$$

$$
y = \overline{\boldsymbol{K}} * u.
$$

$$
\overline{\boldsymbol{K}} \in \mathbb{R}^L := \mathcal{K}_L(\overline{\boldsymbol{A}}, \overline{\boldsymbol{B}}, \overline{\boldsymbol{C}}) := \left(\overline{\boldsymbol{C}}\,\overline{\boldsymbol{A}}^i \overline{\boldsymbol{B}}\right)_{i \in [L]} = \left(\overline{\boldsymbol{C}}\,\overline{\boldsymbol{B}},\ \overline{\boldsymbol{C}}\,\overline{\boldsymbol{A}}\,\overline{\boldsymbol{B}},\ \ldots,\ \overline{\boldsymbol{C}}\,\overline{\boldsymbol{A}}^{L-1} \overline{\boldsymbol{B}}\right). \tag{5}
$$

In other words, equation (4) is a single (non-circular) convolution and can be computed very efficiently with FFTs, provided that $\overline{K}$ is known. However, computing $\overline{K}$ in (5) is non-trivial and is the focus of our technical contributions in Section 3. We call $\overline{K}$ the SSM convolution kernel or filter.

# 3 METHOD: STRUCTURED STATE SPACES (S4)

Our technical results focus on developing the S4 parameterization and showing how to efficiently compute all views of the SSM (Section 2): the continuous representation $(A, B, C)$ (1), the recurrent representation $(\overline{A}, \overline{B}, \overline{C})$ (3), and the convolutional representation $\overline{K}$ (4).

Section 3.1 motivates our approach, which is based on the linear algebraic concepts of conjugation and diagonalization, and discusses why the naive application of this approach does not work. Section 3.2 gives an overview of the key technical components of our approach and formally defines the S4 parameterization. Section 3.3 sketches the main results, showing that S4 is asymptotically efficient (up to log factors) for sequence models. Proofs are in Appendices B and C.

# 3.1 MOTIVATION: DIAGONALIZATION

The fundamental bottleneck in computing the discrete-time SSM (3) is that it involves repeated matrix multiplication by $\overline{A}$. For example, computing (5) naively as in the LSSL involves $L$ successive multiplications by $\overline{A}$, requiring $O(N^2 L)$ operations and $O(NL)$ space.
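The recurrent view (3), the kernel formula (5), and the FFT convolution (4) can be made concrete in a few lines. The following is an illustrative sketch (not the paper's code), assuming NumPy; the helper names `discretize`, `naive_kernel`, and `ssm_conv` are hypothetical. The kernel loop performs $L$ successive multiplications by $\overline{A}$, exhibiting exactly the $O(N^2 L)$ bottleneck described above.

```python
import numpy as np

def discretize(A, B, C, step):
    """Bilinear (Tustin) discretization of the SSM, equation (3)."""
    N = A.shape[0]
    I = np.eye(N)
    BL = np.linalg.inv(I - step / 2.0 * A)   # (I - Delta/2 A)^{-1}
    Ab = BL @ (I + step / 2.0 * A)           # A-bar
    Bb = BL @ (step * B)                     # B-bar
    return Ab, Bb, C                         # C-bar = C

def naive_kernel(Ab, Bb, Cb, L):
    """Kernel (5) via L repeated multiplications by A-bar: O(N^2 L)."""
    x, K = Bb, []
    for _ in range(L):
        K.append((Cb @ x).item())            # K_j = C-bar A-bar^j B-bar
        x = Ab @ x
    return np.array(K)

def ssm_conv(K, u):
    """Non-circular convolution (4), computed with FFTs via zero-padding."""
    L = len(u)
    Kf = np.fft.rfft(K, n=2 * L)
    uf = np.fft.rfft(u, n=2 * L)
    return np.fft.irfft(Kf * uf, n=2 * L)[:L]

# Tiny random SSM; shift the spectrum left so the system is well behaved.
rng = np.random.default_rng(0)
N, L = 4, 32
A = rng.normal(size=(N, N)) - 2 * np.eye(N)
B = rng.normal(size=(N, 1))
C = rng.normal(size=(1, N))
u = rng.normal(size=L)

Ab, Bb, Cb = discretize(A, B, C, step=0.1)
K = naive_kernel(Ab, Bb, Cb, L)

# Recurrent view of (3): x_k = Ab x_{k-1} + Bb u_k, y_k = Cb x_k.
x, y_rec = np.zeros((N, 1)), []
for uk in u:
    x = Ab @ x + Bb * uk
    y_rec.append((Cb @ x).item())

# The two views compute the same sequence-to-sequence map.
assert np.allclose(y_rec, ssm_conv(K, u), atol=1e-6)
```

S4's contribution (Section 3) is to replace `naive_kernel` with a structured computation that never materializes the powers of $\overline{A}$.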
To overcome this bottleneck, we use a structural result that allows us to simplify SSMs.

**Algorithm 1** S4 CONVOLUTION KERNEL (SKETCH)

**Input:** S4 parameters $\Lambda, P, Q, B, C \in \mathbb{C}^N$ and step size $\Delta$
**Output:** SSM convolution kernel $\overline{\boldsymbol{K}} = \mathcal{K}_L(\overline{\boldsymbol{A}}, \overline{\boldsymbol{B}}, \overline{\boldsymbol{C}})$ for $\boldsymbol{A} = \boldsymbol{\Lambda} - \boldsymbol{P}\boldsymbol{Q}^*$ (equation (5))

1. $\widetilde{\boldsymbol{C}} \gets (\boldsymbol{I} - \overline{\boldsymbol{A}}^L)^* \overline{\boldsymbol{C}}$
2. $\begin{bmatrix} k_{00}(\omega) & k_{01}(\omega) \\ k_{10}(\omega) & k_{11}(\omega) \end{bmatrix} \gets \begin{bmatrix} \widetilde{\boldsymbol{C}} & \boldsymbol{Q} \end{bmatrix}^* \left( \frac{2}{\Delta} \frac{1 - \omega}{1 + \omega} - \boldsymbol{\Lambda} \right)^{-1} \begin{bmatrix} \boldsymbol{B} & \boldsymbol{P} \end{bmatrix}$
3. $\hat{\boldsymbol{K}}(\omega) \gets \frac{2}{1 + \omega} \left[k_{00}(\omega) - k_{01}(\omega)(1 + k_{11}(\omega))^{-1} k_{10}(\omega)\right]$
4. $\hat{\boldsymbol{K}} \gets \{\hat{\boldsymbol{K}}(\omega) : \omega = \exp(2\pi i \frac{k}{L})\}$
5. $\overline{\boldsymbol{K}} \gets \mathrm{iFFT}(\hat{\boldsymbol{K}})$

**Lemma 3.1.** Conjugation is an equivalence relation on SSMs: $(A, B, C) \sim (V^{-1}AV, V^{-1}B, CV)$.

*Proof.* Write out the two SSMs with state denoted by $x$ and $\tilde{x}$ respectively:

$$
x' = \boldsymbol{A}x + \boldsymbol{B}u \qquad \tilde{x}' = \boldsymbol{V}^{-1}\boldsymbol{A}\boldsymbol{V}\tilde{x} + \boldsymbol{V}^{-1}\boldsymbol{B}u
$$

$$
y = \boldsymbol{C}x \qquad y = \boldsymbol{C}\boldsymbol{V}\tilde{x}
$$

After multiplying the right-hand SSM by $V$, the two SSMs become identical with $x = V\tilde{x}$. Therefore these compute the exact same operator $u \mapsto y$, but with a change of basis by $V$ in the state $x$.
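Lemma 3.1 can be checked numerically: conjugating by any invertible $V$ leaves the discrete convolution kernel (5), and hence the operator $u \mapsto y$, unchanged. A minimal sketch assuming NumPy; the `kernel` helper is a hypothetical name, combining the bilinear discretization (3) with the naive kernel formula (5).

```python
import numpy as np

def kernel(A, B, C, step, L):
    """Discretize (A, B, C) with the bilinear method (3) and
    return the kernel K_j = C Ab^j Bb of equation (5)."""
    I = np.eye(A.shape[0])
    BL = np.linalg.inv(I - step / 2 * A)
    Ab, Bb = BL @ (I + step / 2 * A), BL @ (step * B)
    x, K = Bb, []
    for _ in range(L):
        K.append((C @ x).item())
        x = Ab @ x
    return np.array(K)

rng = np.random.default_rng(1)
N, L, step = 5, 16, 0.05
A = rng.normal(size=(N, N)) - 2 * np.eye(N)
B = rng.normal(size=(N, 1))
C = rng.normal(size=(1, N))
V = rng.normal(size=(N, N)) + np.eye(N)   # any invertible change of basis
Vinv = np.linalg.inv(V)

K1 = kernel(A, B, C, step, L)
K2 = kernel(Vinv @ A @ V, Vinv @ B, C @ V, step, L)
assert np.allclose(K1, K2)                # same operator, different basis
```

The check works because the bilinear transform commutes with conjugation, so the discretized matrices of the conjugated system are $V^{-1}\overline{A}V$ and $V^{-1}\overline{B}$, and the basis changes cancel in $C\,\overline{A}^j\,\overline{B}$.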
Lemma 3.1 motivates putting $\mathbf{A}$ into a canonical form by conjugation, which is ideally more structured and allows faster computation. For example, if $\mathbf{A}$ were diagonal, the resulting computations become much more tractable. In particular, the desired $\overline{\mathbf{K}}$ (equation (4)) would be a Vandermonde product, which theoretically only needs $O((N + L)\log^2 (N + L))$ arithmetic operations (Pan, 2001).

Unfortunately, the naive application of diagonalization does not work due to numerical issues. First, Vandermonde multiplication is itself a famously ill-conditioned problem (Pan, 2016). Furthermore, we derive the explicit diagonalization for the HiPPO matrix (2) and show it has entries exponentially large in the state size $N$, rendering the diagonalization numerically infeasible (e.g. $CV$ in Lemma 3.1 would not be computable). We note that Gu et al. (2021) proposed a different (unimplemented) algorithm to compute $\overline{K}$ faster than the naive algorithm. In Appendix B, we prove that it is also numerically unstable for related reasons.

**Lemma 3.2.** The HiPPO matrix $\mathbf{A}$ in equation (2) is diagonalized by the matrix $V_{ij} = \binom{i+j}{i-j}$. In particular, $V_{3i,i} = \binom{4i}{2i} \approx 2^{4i}$. Therefore $\mathbf{V}$ has entries of magnitude up to $2^{4N/3}$.

# 3.2 THE S4 PARAMETERIZATION: NORMAL PLUS LOW-RANK

The previous discussion implies that we should only conjugate by well-conditioned matrices $\mathbf{V}$. The ideal scenario is when the matrix $\mathbf{A}$ is diagonalizable by a perfectly conditioned (i.e., unitary) matrix. By the Spectral Theorem of linear algebra, this is exactly the class of normal matrices. However, this class of matrices is restrictive; in particular, it does not contain the HiPPO matrix (2).

We make the observation that although the HiPPO matrix is not normal, it can be decomposed as the sum of a normal and low-rank matrix.
However, this is still not useful by itself: unlike a diagonal matrix, powering up this sum (in (5)) is still slow and not easily optimized. We overcome this bottleneck by simultaneously applying three new techniques.

- Instead of computing $\overline{K}$ directly, we compute its spectrum by evaluating its truncated generating function $\sum_{j=0}^{L-1} \overline{K}_j \zeta^j$ at the roots of unity $\zeta$. $\overline{K}$ can then be found by applying an inverse FFT.
- This generating function is closely related to the matrix resolvent, and now involves a matrix inverse instead of a power. The low-rank term can now be corrected by applying the Woodbury identity, which reduces $(\boldsymbol{A} + \boldsymbol{P}\boldsymbol{Q}^{*})^{-1}$ in terms of $\boldsymbol{A}^{-1}$, truly reducing to the diagonal case.
- Finally, we show that the diagonal matrix case is equivalent to the computation of a Cauchy kernel $\frac{1}{\omega_j - \zeta_k}$, a well-studied problem with stable near-linear algorithms (Pan, 2015; 2017).

Our techniques apply to any matrix that can be decomposed as Normal Plus Low-Rank (NPLR).

**Theorem 1.** All HiPPO matrices from (Gu et al., 2020a) have a NPLR representation

$$
\boldsymbol{A} = \boldsymbol{V} \boldsymbol{\Lambda} \boldsymbol{V}^{*} - \boldsymbol{P} \boldsymbol{Q}^{\top} = \boldsymbol{V} \left(\boldsymbol{\Lambda} - (\boldsymbol{V}^{*} \boldsymbol{P})(\boldsymbol{V}^{*} \boldsymbol{Q})^{*}\right) \boldsymbol{V}^{*} \tag{6}
$$

for unitary $\boldsymbol{V} \in \mathbb{C}^{N \times N}$, diagonal $\boldsymbol{\Lambda}$, and low-rank factorization $\boldsymbol{P}, \boldsymbol{Q} \in \mathbb{R}^{N \times r}$. The matrices HiPPO-LegS, LegT, LagT all satisfy $r = 1$ or $r = 2$. In particular, equation (2) is NPLR with $r = 1$.

# 3.3 S4 ALGORITHMS AND COMPUTATIONAL COMPLEXITY

By equation (6), note that NPLR matrices can be conjugated into diagonal plus low-rank (DPLR) form (now over $\mathbb{C}$ instead of $\mathbb{R}$).
Theorems 2 and 3 describe the complexities of SSMs where $A$ is in DPLR form. S4 is optimal or near-optimal for both the recurrent and convolutional representations.

**Theorem 2 (S4 Recurrence).** Given any step size $\Delta$, computing one step of the recurrence (3) can be done in $O(N)$ operations, where $N$ is the state size.

Theorem 2 follows from the fact that the inverse of a DPLR matrix is also DPLR (e.g. also by the Woodbury identity). This implies that the discretized matrix $\overline{A}$ is the product of two DPLR matrices and thus has $O(N)$ matrix-vector multiplication. Appendix C.2 computes $\overline{A}$ in closed DPLR form.

**Theorem 3 (S4 Convolution).** Given any step size $\Delta$, computing the SSM convolution filter $\overline{K}$ can be reduced to 4 Cauchy multiplies, requiring only $\widetilde{O}(N + L)$ operations and $O(N + L)$ space.

Appendix C, Definition 3 formally defines Cauchy matrices, which are related to rational interpolation problems. Computing with Cauchy matrices is an extremely well-studied problem in numerical analysis, with both fast arithmetic and numerical algorithms based on the famous Fast Multipole Method (FMM) (Pan, 2001; 2015; 2017). The computational complexities of these algorithms under various settings are described in Appendix C, Proposition 5.

We reiterate that Theorem 3 is our core technical contribution, and its algorithm is the very motivation of the NPLR S4 parameterization. This algorithm is formally sketched in Algorithm 1.

# 3.4 ARCHITECTURE DETAILS OF THE DEEP S4 LAYER

Concretely, an S4 layer is parameterized as follows. First, initialize an SSM with $\mathbf{A}$ set to the HiPPO matrix (2). By Lemma 3.1 and Theorem 1, this SSM is unitarily equivalent to some $(\boldsymbol{\Lambda} - \boldsymbol{P}\boldsymbol{Q}^{*}, \boldsymbol{B}, \boldsymbol{C})$ for some diagonal $\boldsymbol{\Lambda}$ and vectors $\boldsymbol{P}, \boldsymbol{Q}, \boldsymbol{B}, \boldsymbol{C} \in \mathbb{C}^{N \times 1}$. These comprise S4's $5N$ trainable parameters.
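The NPLR structure of Theorem 1 for the HiPPO matrix (2) can be verified directly. A sketch assuming NumPy: the rank-1 vector $p_n = (n + \tfrac{1}{2})^{1/2}$ used below is the standard correction implied by equation (2), stated here without the paper's derivation. Adding $pp^\top$ to $\mathbf{A}$ leaves $-\tfrac{1}{2}\boldsymbol{I}$ plus a skew-symmetric matrix, which is normal, so $\mathbf{A}$ is normal plus low-rank with $r = 1$.

```python
import numpy as np

# Build the HiPPO matrix (2): A_nk = -(2n+1)^{1/2}(2k+1)^{1/2} if n > k,
# -(n+1) if n = k, and 0 if n < k.
N = 8
n = np.arange(N)
A = -np.where(
    n[:, None] > n[None, :],
    np.sqrt((2 * n[:, None] + 1) * (2 * n[None, :] + 1)),
    np.where(n[:, None] == n[None, :], n[:, None] + 1.0, 0.0),
)

# Rank-1 correction vector (assumption stated in the text above).
p = np.sqrt(n + 0.5)

S = A + np.outer(p, p)          # remove the low-rank part
skew = S + 0.5 * np.eye(N)      # remainder should be skew-symmetric
assert np.allclose(skew, -skew.T)

# A matrix is normal iff it commutes with its (conjugate) transpose;
# -1/2 I plus a skew-symmetric matrix satisfies this, so S is normal
# and hence unitarily (stably) diagonalizable over the complex numbers.
assert np.allclose(S @ S.T, S.T @ S)
```

This is exactly the decomposition that makes the DPLR conjugation of Section 3.3 numerically viable, in contrast to the exponentially ill-conditioned direct diagonalization of Lemma 3.2.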
The overall deep neural network (DNN) architecture of S4 is similar to prior work. As defined above, S4 defines a map from $\mathbb{R}^L \to \mathbb{R}^L$, i.e. a 1-D sequence map. Typically, DNNs operate on feature maps of size $H$ instead of 1. S4 handles multiple features by simply defining $H$ independent copies of itself, and then mixing the $H$ features with a position-wise linear layer, for a total of $O(H^2) + O(HN)$ parameters per layer. Nonlinear activation functions are also inserted between these layers. Overall, S4 defines a sequence-to-sequence map of shape (batch size, sequence length, hidden dimension), exactly the same as related sequence models such as Transformers, RNNs, and CNNs.

# 4 EXPERIMENTS

Section 4.1 benchmarks S4 against the LSSL and efficient Transformer models. Section 4.2 validates S4 on LRDs: the LRA benchmark and raw speech classification. Section 4.3 investigates whether S4 can be used as a general sequence model to perform effectively and efficiently in a wide variety of settings, including image classification, image and text generation, and time series forecasting.

# 4.1 S4 EFFICIENCY BENCHMARKS

We verify that S4 can be trained quickly and efficiently, both compared to the LSSL and to efficient Transformer variants designed for long-range sequence modeling. As outlined in Section 3, S4 is theoretically much more efficient than the LSSL, and Table 1 confirms that S4 is orders of magnitude more speed- and memory-efficient for practical layer sizes. In fact, S4's speed and memory use are competitive with the most efficient Transformer variants benchmarked by Tay et al. (2021), the Linear Transformer (Katharopoulos et al., 2020) and Performer (Choromanski et al., 2020), in a parameter-matched setting (Table 2, following the protocol of Tay et al. (2021)).

Table 1: Deep SSMs: The S4 parameterization with Algorithm 1 is asymptotically more efficient than the LSSL.
| Dim. | Training step (ms): 128 | 256 | 512 | Memory alloc. (MB): 128 | 256 | 512 |
|---|---|---|---|---|---|---|
| LSSL | 9.32 | 20.6 | 140.7 | 222.1 | 1685 | 13140 |
| S4 | 4.77 | 3.07 | 4.75 | 5.3 | 12.6 | 33.5 |
| Ratio | 1.9× | 6.7× | 29.6× | 42.0× | 133× | 392× |

Table 2: Benchmarks vs. efficient Transformers (relative to the Transformer baseline)

| Model | Length 1024: Speed | Mem. | Length 4096: Speed | Mem. |
|---|---|---|---|---|
| Transformer | 1× | 1× | 1× | 1× |
| Performer | 1.23× | 0.43× | 3.79× | 0.086× |
| Linear Trans. | 1.58× | 0.37× | 5.35× | 0.067× |
| S4 | 1.58× | 0.43× | 5.19× | 0.091× |
![](images/6f370a916b626af5b748792fa3cedea7f3e617daac82c01c85e406b345334887.jpg)

Figure 2: Visualizations of a trained S4 model on LRA Path-X. SSM convolution kernels $\overline{\mathbf{K}} \in \mathbb{R}^{16384}$ are reshaped into a $128 \times 128$ image. (Left) Example from the Path-X task, which involves deducing if the markers are connected by a path. (Top) Filters from the first layer. (Bottom) Filters from the last layer.

Table 3: (Long Range Arena) Accuracy on full suite of LRA tasks. (Top) Original Transformer variants in LRA. Full results in Appendix D.2. (Bottom) Other models reported in the literature.

| Model | ListOps | Text | Retrieval | Image | Pathfinder | Path-X | Avg |
|---|---|---|---|---|---|---|---|
| Transformer | 36.37 | 64.27 | 57.46 | 42.44 | 71.40 | ✗ | 53.66 |
| Reformer | 37.27 | 56.10 | 53.40 | 38.07 | 68.50 | ✗ | 50.56 |
| BigBird | 36.05 | 64.02 | 59.29 | 40.83 | 74.87 | ✗ | 54.17 |
| Linear Trans. | 16.13 | 65.90 | 53.09 | 42.34 | 75.30 | ✗ | 50.46 |
| Performer | 18.01 | 65.40 | 53.82 | 42.77 | 77.05 | ✗ | 51.18 |
| FNet | 35.33 | 65.11 | 59.61 | 38.67 | 77.80 | ✗ | 54.42 |
| Nyströmformer | 37.15 | 65.52 | 79.56 | 41.58 | 70.94 | ✗ | 57.46 |
| Luna-256 | 37.25 | 64.57 | 79.29 | 47.38 | 77.72 | ✗ | 59.37 |
| S4 | 58.35 | 76.02 | 87.09 | 87.26 | 86.05 | 88.10 | 80.48 |
# 4.2 LEARNING LONG RANGE DEPENDENCIES

As described in Sections 2.2 and 3.1, S4 uses a principled approach to address LRDs based on the HiPPO theory of continuous-time memorization. Our goal in this section is to validate that S4 achieves high performance on difficult tasks that require long-range reasoning. We focus here on two problems: (i) the Long-Range Arena, a well-known benchmark designed to test efficient sequence models on LRDs, and (ii) a speech classification problem as a real-world test of LRDs.

**Long Range Arena (LRA).** LRA (Tay et al., 2021) contains 6 tasks with lengths 1K-16K steps, encompassing modalities and objectives that require similarity, structural, and visuospatial reasoning. Table 3 compares S4 against the 11 Transformer variants from Tay et al. (2021) as well as follow-up work. S4 substantially advances the SoTA, outperforming all baselines on all tasks and averaging $80.48\%$ compared to less than $60\%$ for every baseline. Notably, S4 solves the Path-X task, an extremely challenging task that involves reasoning about LRDs over sequences of length $128 \times 128 = 16384$. All previous models have failed (i.e. random guessing) due to memory or computation bottlenecks, or simply being unable to learn such long dependencies.

We analyze S4's performance on Path-X by visualizing its learned representations, in particular the 1-D convolution kernels $\overline{K}$ which are the focus of our technical results in Section 3. Fig. 2 shows that S4 learns a variety of filters that display spatially consistent structure and demonstrate awareness of the 2-D nature of the data. In particular, the lower layers learn simple kernels that extract features from just a few rows of local context while ignoring the rest of the image. On the other hand, higher layers aggregate information globally across full columns of the image at varying spatial frequencies.
Filters in these higher layers span the entire context (16384 pixels), confirming S4's ability to learn LRDs.

**Raw Speech Classification.** Speech is a typical real-world time series domain, involving signals sampled from an underlying physical process at high frequency. We perform speech classification using the Speech Commands dataset (Warden, 2018). While most sequence models for speech rely on extensive preprocessing (e.g. conversion to MFCC features), we classify raw speech (length-16000) following Romero et al. (2021). S4 achieves $98.3\%$ accuracy, higher than all baselines that use the $100\times$ shorter MFCC features, validating that a powerful LRD model is able to extract more information from the raw data and outperform hand-crafted pre-processing. Additionally, we include a baseline CNN specifically designed for raw speech, the discriminator from the WaveGAN model (Donahue et al., 2019), which performs worse than S4 while having $90\times$ more parameters and incorporating many more architectural heuristics (Appendix D.2).

# 4.3 S4 AS A GENERAL SEQUENCE MODEL

A key goal of sequence modeling research is to develop a single model that can be applied in many domains (e.g. images, audio, text, time-series) with a broad range of capabilities (e.g. efficient training, fast generation, handling irregularly sampled data). As a fundamental scientific model, SSMs are a promising candidate that come with a range of capabilities, and S4's strong results on LRD benchmarks spanning images, text, and speech are evidence of S4's potential as a general sequence model. In this section, we focus on understanding this question in more depth by highlighting key strengths of S4 in settings that usually require specialized models. The tasks we focus on (generative modeling, image classification, time-series forecasting) are considered LRD tasks in the literature, and serve as additional validation that S4 handles LRDs efficiently.

**Large-scale generative modeling.**
We investigate two well-studied image and text benchmarks to validate the scalability, flexibility, and efficiency of S4. These tasks require much larger models than our previous tasks, up to 250M parameters.

First, CIFAR density estimation is a popular benchmark for autoregressive models, where images are flattened into a sequence of 3072 RGB subpixels that are predicted one by one. Table 6 shows that with no 2D inductive bias, S4 is competitive with the best models designed for this task.

Second, WikiText-103 is an established benchmark for language modeling, an important task for large-scale sequence models where tokens are predicted sequentially based on past context. Although RNNs were the model of choice for many years, Transformers are now the dominant model in such applications that contain data that is inherently discrete. We show that alternative models

Table 4: (Speech classification) Transformer, CTM, RNN, CNN, and SSM models. (MFCC) Standard pre-processed MFCC features (length-161). (Raw) Unprocessed signals (length-16000). ($0.5\times$) Frequency change at test time. $\times$ denotes not applicable or computationally infeasible on single GPU.

| Model | MFCC | Raw | 0.5× |
|---|---|---|---|
| Transformer | 90.75 | × | × |
| Performer | 80.85 | 30.77 | 30.68 |
| ODE-RNN | 65.9 | × | × |
| NRDE | 89.8 | 16.49 | 15.12 |
| ExpRNN | 82.13 | 11.6 | 10.8 |
| LipschitzRNN | 88.38 | × | × |
| CKConv | 95.3 | 71.66 | 65.96 |
| WaveGAN-D | × | 96.25 | × |
| LSSL | 93.58 | × | × |
| S4 | 93.96 | 98.32 | 96.30 |

Table 5: (Pixel-level 1-D image classification) Transformer, RNN, CNN, and SSM models. Extended results + citations in Appendix D.

| Model | sMNIST | pMNIST | sCIFAR |
|---|---|---|---|
| Transformer | 98.9 | 97.9 | 62.2 |
| LSTM | 98.9 | 95.11 | 63.01 |
| r-LSTM | 98.4 | 95.2 | 72.2 |
| UR-LSTM | 99.28 | 96.96 | 71.00 |
| UR-GRU | 99.27 | 96.51 | 74.4 |
| HiPPO-RNN | 98.9 | 98.3 | 61.1 |
| LMU-FFT | – | 98.49 | – |
| LipschitzRNN | 99.4 | 96.3 | 64.2 |
| TCN | 99.0 | 97.2 | – |
| TrellisNet | 99.20 | 98.13 | 73.42 |
| CKConv | 99.32 | 98.54 | 63.74 |
| LSSL | 99.53 | 98.76 | 84.65 |
| S4 | 99.63 | 98.70 | 91.13 |
Table 6: (CIFAR-10 density estimation) As a generic sequence model, S4 is competitive with previous autoregressive models (in bits per dim.) while incorporating no 2D inductive bias, and has fast generation through its recurrence mode.

| Model | bpd | 2D bias | Images / sec |
|---|---|---|---|
| Transformer | 3.47 | None | 0.32 (1×) |
| Linear Transf. | 3.40 | None | 17.85 (56×) |
| PixelCNN | 3.14 | 2D conv. | – |
| Row PixelRNN | 3.00 | 2D BiLSTM | – |
| PixelCNN++ | 2.92 | 2D conv. | 19.19 (59.97×) |
| Image Transf. | 2.90 | 2D local attn. | 0.54 (1.7×) |
| PixelSNAIL | 2.85 | 2D conv. + attn. | 0.13 (0.4×) |
| Sparse Transf. | 2.80 | 2D sparse attn. | – |
| S4 (base) | 2.92 | None | 20.84 (65.1×) |
| S4 (large) | 2.85 | None | 3.36 (10.5×) |

Table 7: (WikiText-103 language modeling) S4 approaches the performance of Transformers with much faster generation. (Top) Transformer baseline which our implementation is based on, with attention replaced by S4. (Bottom) Attention-free models (RNNs and CNNs).

| Model | Params | Test ppl. | Tokens / sec |
|---|---|---|---|
| Transformer | 247M | 20.51 | 0.8K (1×) |
| GLU CNN | 229M | 37.2 | – |
| AWD-QRNN | 151M | 33.0 | – |
| LSTM + Hebb. | – | 29.2 | – |
| TrellisNet | 180M | 29.19 | – |
| Dynamic Conv. | 255M | 25.0 | – |
| TaLK Conv. | 240M | 23.3 | – |
| S4 | 249M | 21.28 | 48K (60×) |
to Transformers can still be competitive in these settings. By simply taking a strong Transformer baseline (Baevski & Auli, 2018) and replacing the self-attention layers, S4 substantially closes the gap to Transformers (within $0.8$ ppl), setting SoTA for attention-free models by over $2$ ppl.

**Fast autoregressive inference.** A prominent limitation of autoregressive models is inference speed (e.g. generation), since they require a pass over the full context for every new sample. Several methods have been specifically crafted to overcome this limitation, such as the Linear Transformer, a hybrid Transformer/RNN that switches to a stateful, recurrent view at inference time for speed.

As a stateful model, SSMs automatically have this ability (Fig. 1). By switching to its recurrent representation (Section 2.3), S4 requires constant memory and computation per time step, in contrast to standard autoregressive models which scale in the context length. On both CIFAR-10 and WikiText-103, we report the throughput of various models at generation time, with S4 around $60\times$ faster than a vanilla Transformer on both tasks (details in Appendix D.3.3).

**Sampling resolution change.** As a continuous-time model, S4 automatically adapts to data sampled at different rates, a challenging setting for time series with a dedicated line of work (Rubanova et al., 2019; De Brouwer et al., 2019; Romero et al., 2021). Without re-training, S4 achieves $96.3\%$ accuracy at $0.5\times$ the frequency on Speech Commands (Table 4), simply by changing its internal step size $\Delta$ (Section 2.3).

**Learning with weaker inductive bias.** Beyond our results on speech (Section 4.2), we further validate that S4 can be applied with minimal modifications on two domains that typically require specialized domain-specific preprocessing and architectures.
First, we compare S4 to the Informer (Zhou et al., 2021), a new Transformer architecture that uses a complex encoder-decoder designed for time-series forecasting problems. A simple application of S4 that treats forecasting as a masked sequence-to-sequence transformation (Fig. 3) outperforms the Informer and other baselines on $40/50$ settings across 5 forecasting tasks. Notably, S4 is better on the longest setting in each task, e.g. reducing MSE by $37\%$ when forecasting 30 days of weather data (Appendix D.3.5).

Finally, we evaluate S4 on pixel-level sequential image classification tasks (Table 5), popular benchmarks which were originally LRD tests for RNNs (Arjovsky et al., 2016). Beyond LRDs, these benchmarks point to a recent effort of the ML community to solve vision problems with reduced domain knowledge, in the spirit of models such as Vision Transformers (Dosovitskiy et al., 2020) and MLP-Mixer (Tolstikhin et al., 2021). Sequential CIFAR is a particularly challenging dataset where, outside of SSMs, all sequence models have a gap of over $25\%$ to a simple 2-D CNN. By contrast, S4 is competitive with a larger ResNet18 (7.9M vs. 11.0M parameters), both with ($93.16\%$ vs. $95.62\%$) or without ($91.12\%$ vs. $89.46\%$) data augmentation. Moreover, it is much more robust to other architectural choices (e.g. $90.46\%$ vs. $79.52\%$ when swapping BatchNorm for LayerNorm).

# 5 CONCLUSION

We introduce S4, a sequence model that uses a new parameterization for the state space model's continuous-time, recurrent, and convolutional views to efficiently model LRDs in a principled manner. Results across established benchmarks evaluating a diverse range of data modalities and model capabilities suggest that S4 has the potential to be an effective general sequence modeling solution.

# REFERENCES

Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. In The International Conference on Machine Learning (ICML), pp.
1120-1128, 2016.

Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853, 2018.

Shaojie Bai, J Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271, 2018.

Shaojie Bai, J Zico Kolter, and Vladlen Koltun. Trellis networks for sequence modeling. In The International Conference on Learning Representations (ICLR), 2019.

Shiyu Chang, Yang Zhang, Wei Han, Mo Yu, Xiaoxiao Guo, Wei Tan, Xiaodong Cui, Michael Witbrock, Mark Hasegawa-Johnson, and Thomas S Huang. Dilated recurrent neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 2017.

Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.

Narsimha Chilkuri and Chris Eliasmith. Parallelizing Legendre memory unit training. The International Conference on Machine Learning (ICML), 2021.

Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. In The International Conference on Learning Representations (ICLR), 2020.

Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In The International Conference on Machine Learning (ICML), pp. 933-941. PMLR, 2017.

Edward De Brouwer, Jaak Simm, Adam Arany, and Yves Moreau. GRU-ODE-Bayes: Continuous modeling of sporadically-observed time series. In Advances in Neural Information Processing Systems (NeurIPS), 2019.

Chris Donahue, Julian McAuley, and Miller Puckette. Adversarial audio synthesis. In The International Conference on Learning Representations (ICLR), 2019.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.

N Benjamin Erichson, Omri Azencot, Alejandro Queiruga, Liam Hodgkinson, and Michael W Mahoney. Lipschitz recurrent neural networks. In The International Conference on Learning Representations (ICLR), 2021.

Gene H Golub and Charles F Van Loan. Matrix Computations, volume 3. JHU Press, 2013.

Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, and Christopher Ré. HiPPO: Recurrent memory with optimal polynomial projections. In Advances in Neural Information Processing Systems (NeurIPS), 2020a. URL https://proceedings.neurips.cc/paper/2020/hash/102f0bb6efb3a6128a3c750dd16729be-Abstract.html.

Albert Gu, Caglar Gulcehre, Tom Le Paine, Matt Hoffman, and Razvan Pascanu. Improving the gating mechanism of recurrent neural networks. In The International Conference on Machine Learning (ICML), 2020b.

Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher Ré. Combining recurrent, convolutional, and continuous-time models with the structured learnable linear state space layer. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are RNNs: Fast autoregressive transformers with linear attention. In The International Conference on Machine Learning (ICML), pp. 5156-5165. PMLR, 2020.

Mario Lezcano-Casado and David Martínez-Rubio. Cheap orthogonal constraints in neural networks: A simple parametrization of the orthogonal and unitary group. In The International Conference on Machine Learning (ICML), 2019.

Shuai Li, Wanqing Li, Chris Cook, Ce Zhu, and Yanbo Gao. Independently recurrent neural network (IndRNN): Building a longer and deeper RNN. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5457-5466, 2018.

Vasileios Lioutas and Yuhong Guo. Time-aware large kernel convolutions. In The International Conference on Machine Learning (ICML), pp. 6172-6183. PMLR, 2020.

Stephen Merity, Nitish Shirish Keskar, James Bradbury, and Richard Socher. Scalable language modeling: WikiText-103 on a single GPU in 12 hours. SysML, 2018.

Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.

Victor Pan. Structured Matrices and Polynomials: Unified Superfast Algorithms. Springer Science & Business Media, 2001.

Victor Pan. Fast approximate computations with Cauchy matrices and polynomials. Mathematics of Computation, 86(308):2799-2826, 2017.

Victor Y Pan. Transformations of matrix structures work again. Linear Algebra and Its Applications, 465:107-138, 2015.

Victor Y Pan. How bad are Vandermonde matrices? SIAM Journal on Matrix Analysis and Applications, 37(2):676-694, 2016.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In The International Conference on Machine Learning (ICML), pp. 1310-1318, 2013.

Jack Rae, Chris Dyer, Peter Dayan, and Timothy Lillicrap. Fast parametric learning with activation memorization. The International Conference on Machine Learning (ICML), 2018.

Prajit Ramachandran, Tom Le Paine, Pooya Khorrami, Mohammad Babaeizadeh, Shiyu Chang, Yang Zhang, Mark A Hasegawa-Johnson, Roy H Campbell, and Thomas S Huang. Fast generation for convolutional autoregressive models. arXiv preprint arXiv:1704.06001, 2017.

David W Romero, Anna Kuzina, Erik J Bekkers, Jakub M Tomczak, and Mark Hoogendoorn. CKConv: Continuous kernel convolution for sequential data. arXiv preprint arXiv:2102.02611, 2021.

Yulia Rubanova, Tian Qi Chen, and David K Duvenaud. Latent ordinary differential equations for irregularly-sampled time series. In Advances in Neural Information Processing Systems (NeurIPS), pp. 5321-5331, 2019.

T Konstantin Rusch and Siddhartha Mishra. UnICORNN: A recurrent model for learning very long time dependencies. The International Conference on Machine Learning (ICML), 2021.

Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017.

Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. Long range arena: A benchmark for efficient transformers. In The International Conference on Learning Representations (ICLR), 2021. URL https://openreview.net/forum?id=qVyeW-grC2k.

Ilya Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, et al. MLP-Mixer: An all-MLP architecture for vision. arXiv preprint arXiv:2105.01601, 2021.

Trieu H Trinh, Andrew M Dai, Minh-Thang Luong, and Quoc V Le. Learning longer-term dependencies in RNNs with auxiliary losses. In The International Conference on Machine Learning (ICML), 2018.

Arnold Tustin. A method of analysing the behaviour of linear systems in terms of time series. Journal of the Institution of Electrical Engineers-Part IIA: Automatic Regulators and Servo Mechanisms, 94(1):130-142, 1947.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems (NeurIPS), 2017.

Aaron Voelker, Ivana Kajic, and Chris Eliasmith.
Legendre memory units: Continuous-time representation in recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 15544-15553, 2019. +Aaron Russell Voelker. Dynamical systems in spiking neuromorphic hardware. PhD thesis, University of Waterloo, 2019. +Pete Warden. Speech commands: A dataset for limited-vocabulary speech recognition. *ArXiv*, abs/1804.03209, 2018. +Max A Woodbury. Inverting modified matrices. Memorandum report, 42:106, 1950. +Felix Wu, Angela Fan, Alexei Baevski, Yann N Dauphin, and Michael Auli. Pay less attention with lightweight and dynamic convolutions. In The International Conference on Learning Representations (ICLR), 2019. +Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In The Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Virtual Conference, volume 35, pp. 11106-11115. AAAI Press, 2021. + +# A DISCUSSION + +Related Work. Our work is most closely related to a line of work originally motivated by a particular biologically-inspired SSM, which led to mathematical models for addressing LRDs. Voelker (2019); Voelker et al. (2019) derived a non-trainable SSM motivated from approximating a neuromorphic spiking model, and Chilkuri & Eliasmith (2021) showed that it could be sped up at train time with a convolutional view. Gu et al. (2020a) extended this special case to a general continuous-time function approximation framework with several more special cases of $\mathbf{A}$ matrices designed for long-range dependencies. However, instead of using a true SSM, all of these works fixed a choice of $\mathbf{A}$ and built RNNs around it. Most recently, Gu et al. (2021) used the full (1) explicitly as a deep SSM model, exploring new conceptual views of SSMs, as well as allowing $\mathbf{A}$ to be trained. 
As mentioned in Section 1, their method used a naive instantiation of SSMs that suffered from an additional factor of $N$ in memory and $N^2$ in computation.

Beyond this work, our technical contributions (Section 3) on the S4 parameterization and algorithms are applicable to a broader family of SSMs including those investigated in prior works, and our techniques for working with these models may be of independent interest.

Implementation. The computational core of S4's training algorithm is the Cauchy kernel discussed in Sections 3.2 and 3.3 and Appendix C.3. As described in Appendix C.3 Proposition 5, there are many algorithms for it with differing computational complexities and degrees of sophistication. Our current implementation of S4 actually uses the naive $O(NL)$ algorithm, which is easily parallelized on GPUs and can be implemented with readily available libraries; we leverage the pykeops library for memory-efficient kernel operations. However, pykeops is a much more general library that may not be optimized for the Cauchy kernels used here, and we believe that a dedicated CUDA implementation can be more efficient. Additionally, as discussed in this work, there are asymptotically faster and numerically stable algorithms for the Cauchy kernel (Proposition 5). However, these algorithms are currently not implemented for GPUs due to a lack of previous applications that require them. We believe that more efficient implementations of these self-contained computational kernels are possible, and that S4 (and SSMs at large) may have significant room for further improvements in efficiency.

Limitations and Future Directions. In this work, we show that S4 can address a wide variety of data effectively. However, it may not necessarily be the most suitable model for all types of data. For example, Table 7 still found a gap compared to Transformers for language modeling.
An interesting future direction is exploring combinations of S4 with other sequence models to complement their strengths. We are excited about other directions, including continuing to explore the benefits of S4 on audio data (e.g. pre-training or generation settings), and generalizing HiPPO and S4 to higher-dimensional data for image and video applications. + +# B NUMERICAL INSTABILITY OF LSSL + +This section proves the claims made in Section 3.1 about prior work. We first derive the explicit diagonalization of the HiPPO matrix, confirming its instability because of exponentially large entries. We then discuss the proposed theoretically fast algorithm from (Gu et al., 2021) (Theorem 2) and show that it also involves exponentially large terms and thus cannot be implemented. + +# B.1 HIPPO DIAGONALIZATION + +Proof of Lemma 3.2. The HiPPO matrix (2) is equal, up to sign and conjugation by a diagonal matrix, to + +$$ +\boldsymbol {A} = \left[ \begin{array}{c c c c c c c c c} 1 & & & & & & & \\ - 1 & 2 & & & & & & \\ 1 & - 3 & 3 & & & & & \\ - 1 & 3 & - 5 & 4 & & & & \\ 1 & - 3 & 5 & - 7 & 5 & & & \\ - 1 & 3 & - 5 & 7 & - 9 & 6 & & \\ 1 & - 3 & 5 & - 7 & 9 & - 1 1 & 7 & \\ - 1 & 3 & - 5 & 7 & - 9 & 1 1 & - 1 3 & 8 \\ \vdots & & & & & & & \ddots \end{array} \right] +$$ + +$$ +\boldsymbol {A} _ {n k} = \left\{ \begin{array}{l l} (- 1) ^ {n - k} (2 k + 1) & n > k \\ k + 1 & n = k \\ 0 & n < k \end{array} \right.. +$$ + +Our goal is to show that this $A$ is diagonalized by the matrix + +$$ +\boldsymbol {V} = \left( \begin{array}{c} i + j \\ i - j \end{array} \right) _ {i j} = \left[ \begin{array}{c c c c c c c c} 1 & & & & & & & \\ 1 & 1 & & & & & & \\ 1 & 3 & 1 & & & & & \\ 1 & 6 & 5 & 1 & & & & \\ 1 & 1 0 & 1 5 & 7 & 1 & & & \\ 1 & 1 5 & 3 5 & 2 8 & 9 & 1 & & \\ \vdots & & & & & & \ddots \end{array} \right], +$$ + +or in other words that columns of this matrix are eigenvectors of $A$ . 
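Before the formal proof, the claim can be checked numerically for small $N$. The following is a quick sanity sketch (not from the released code; variable names are ours) that builds both matrices with exact integer arithmetic and verifies that each column of $V$ is an eigenvector of $A$ with eigenvalue $j+1$:

```python
import numpy as np
from math import comb

N = 8

# A[n, k] = (-1)^(n-k) (2k+1) if n > k, n+1 if n = k, 0 if n < k
A = np.zeros((N, N), dtype=np.int64)
for n in range(N):
    A[n, n] = n + 1
    for k in range(n):
        A[n, k] = (-1) ** (n - k) * (2 * k + 1)

# V[i, j] = binom(i+j, i-j) for i >= j, 0 otherwise
V = np.zeros((N, N), dtype=np.int64)
for i in range(N):
    for j in range(i + 1):
        V[i, j] = comb(i + j, i - j)

# Each column v^(j) is an eigenvector with eigenvalue j+1,
# i.e. A V = V diag(1, 2, ..., N), exactly in integer arithmetic
assert np.array_equal(A @ V, V @ np.diag(np.arange(1, N + 1, dtype=np.int64)))
```

The exponential growth of the binomial entries of $V$ is also visible directly in this sketch, which is the source of the instability discussed above.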
+ +Concretely, we will show that the $j$ -th column of this matrix $\pmb{v}^{(j)}$ with elements + +$$ +\boldsymbol {v} _ {i} ^ {(j)} = \left\{ \begin{array}{l l} 0 & i < j \\ \binom {i + j} {i - j} = \binom {i + j} {2 j} & i \geq j \end{array} \right. +$$ + +is an eigenvector with eigenvalue $j + 1$ . In other words we must show that for all indices $k \in [N]$ + +$$ +\left(\boldsymbol {A} \boldsymbol {v} ^ {(j)}\right) _ {k} = \sum_ {i} \boldsymbol {A} _ {k i} \boldsymbol {v} _ {i} ^ {(j)} = (j + 1) \boldsymbol {v} _ {k} ^ {(j)}. \tag {7} +$$ + +If $k < j$ , then for all $i$ inside the sum, either $k < i$ or $i < j$ . In the first case $A_{ki} = 0$ and in the second case $v_i^{(j)} = 0$ , so both sides of equation (7) are equal to 0. + +It remains to show the case $k \geq j$ , which proceeds by induction on $k$ . Expanding equation (7) using the formula for $A$ yields + +$$ +(\boldsymbol {A} \boldsymbol {v}) _ {k} ^ {(j)} = \sum_ {i} \boldsymbol {A} _ {k i} \boldsymbol {v} _ {i} ^ {(j)} = \sum_ {i = j} ^ {k - 1} (- 1) ^ {k - i} (2 i + 1) \left( \begin{array}{c} i + j \\ 2 j \end{array} \right) + (k + 1) \left( \begin{array}{c} k + j \\ 2 j \end{array} \right). +$$ + +In the base case $k = j$ , the sum disappears and we are left with $(\mathbf{A}\mathbf{v}^{(j)})_j = (j + 1)\binom{2j}{2j} = (j + 1)\mathbf{v}_j^{(j)}$ as desired. + +Otherwise, the sum for $(\mathbf{A}\mathbf{v})_k^{(j)}$ is the same as the sum for $(\mathbf{A}\mathbf{v})_{k - 1}^{(j)}$ but with sign reversed and a few edge terms. 
The result follows from applying the inductive hypothesis and algebraic simplification:

$$
\begin{array}{l} (\boldsymbol {A} \boldsymbol {v}) _ {k} ^ {(j)} = - (\boldsymbol {A} \boldsymbol {v}) _ {k - 1} ^ {(j)} - (2 k - 1) \binom {k - 1 + j} {2 j} + k \binom {k - 1 + j} {2 j} + (k + 1) \binom {k + j} {2 j} \\ = - (j + 1) \binom {k - 1 + j} {2 j} - (k - 1) \binom {k - 1 + j} {2 j} + (k + 1) \binom {k + j} {2 j} \\ = - (j + k) \binom {k - 1 + j} {2 j} + (k + 1) \binom {k + j} {2 j} \\ = - (j + k) \frac {(k - 1 + j) !}{(k - 1 - j) ! (2 j) !} + (k + 1) \binom {k + j} {2 j} \\ = - \frac {(k + j) !}{(k - 1 - j) ! (2 j) !} + (k + 1) \binom {k + j} {2 j} \\ = - (k - j) \frac {(k + j) !}{(k - j) ! (2 j) !} + (k + 1) \binom {k + j} {2 j} \\ = (j - k) \binom {k + j} {2 j} + (k + 1) \binom {k + j} {2 j} \\ = (j + 1) \binom {k + j} {2 j} = (j + 1) \boldsymbol {v} _ {k} ^ {(j)}. \\ \end{array}
$$

# B.2 FAST BUT UNSTABLE LSSL ALGORITHM

Instead of diagonalization, Gu et al. (2021, Theorem 2) proposed a sophisticated fast algorithm to compute

$$
K _ {L} (\overline {{A}}, \overline {{B}}, \overline {{C}}) = (\overline {{C}} \, \overline {{B}}, \overline {{C}} \, \overline {{A}} \, \overline {{B}}, \dots , \overline {{C}} \, \overline {{A}} ^ {L - 1} \overline {{B}}).
$$

This algorithm runs in $O(N \log^2 N + L \log L)$ operations and $O(N + L)$ space. However, we now show that this algorithm is also numerically unstable.

There are several reasons for the instability of this algorithm, but most directly we can pinpoint a particular intermediate quantity that it uses.

Definition 1. The fast LSSL algorithm computes coefficients of $p(x)$, the characteristic polynomial of $A$, as an intermediate computation. Additionally, it computes the coefficients of its inverse, $p(x)^{-1}$ (mod $x^L$).

We now claim that this quantity is numerically infeasible. We narrow down to the case when $\overline{A} = I$ is the identity matrix.
Note that this case is actually in some sense the most typical case: when discretizing the continuous-time SSM to discrete-time by a step-size $\Delta$, the discretized transition matrix $\overline{A}$ is brought closer to the identity. For example, with the Euler discretization $\overline{A} = I + \Delta A$, we have $\overline{A} \to I$ as the step size $\Delta \to 0$.

Lemma B.1. When $\overline{A} = I$, the fast LSSL algorithm requires computing terms exponentially large in $N$.

Proof. The characteristic polynomial of $I$ is

$$
p (x) = \det (\boldsymbol {I} - x \boldsymbol {I}) = (1 - x) ^ {N}.
$$

These coefficients have size up to $\binom{N}{\frac{N}{2}} \approx \frac{2^N}{\sqrt{\pi N/2}}$.

The inverse of $p(x)$ has even larger coefficients. It can be calculated in closed form by the generalized binomial formula:

$$
(1 - x) ^ {- N} = \sum_ {k = 0} ^ {\infty} \binom {N + k - 1} {k} x ^ {k}.
$$

Taking this (mod $x^L$), the largest coefficient is

$$
\binom {N + L - 2} {L - 1} = \binom {N + L - 2} {N - 1} = \frac {(L + N - 2) (L + N - 3) \cdots L}{(N - 1) !}.
$$

When $L = N$ this is

$$
\binom {2 (N - 1)} {N - 1} \approx \frac {2 ^ {2 N}}{\sqrt {\pi N}},
$$

which is already larger than the coefficients of $(1 - x)^{N}$, and only increases as $L$ grows.

# C S4 ALGORITHM DETAILS

This section proves the results of Section 3.3, providing complete details of our efficient algorithms for S4.

Appendices C.1 to C.3 prove Theorems 1 to 3 respectively.

# C.1 NPLR REPRESENTATIONS OF HIPPO MATRICES

We first prove Theorem 1, showing that all HiPPO matrices for continuous-time memory fall under the S4 normal plus low-rank (NPLR) representation.

Proof of Theorem 1.
We consider each of the three cases HiPPO-LagT, HiPPO-LegS, and HiPPO-LegT separately. Note that the primary HiPPO matrix defined in this work (equation (2)) is the HiPPO-LegS matrix.

HiPPO-LagT. The HiPPO-LagT matrix is simply

$$
\boldsymbol {A} _ {n k} = \left\{ \begin{array}{l l} 0 & n < k \\ - \frac {1}{2} & n = k \\ - 1 & n > k \end{array} \right.
$$

$$
\boldsymbol {A} = - \left[ \begin{array}{c c c c c} \frac {1}{2} & & & & \dots \\ 1 & \frac {1}{2} & & & \\ 1 & 1 & \frac {1}{2} & & \\ 1 & 1 & 1 & \frac {1}{2} & \\ \vdots & & & & \ddots \end{array} \right].
$$

Adding the matrix of all $\frac{1}{2}$, which is rank 1, yields

$$
- \left[ \begin{array}{c c c c} & - \frac {1}{2} & - \frac {1}{2} & - \frac {1}{2} \\ \frac {1}{2} & & - \frac {1}{2} & - \frac {1}{2} \\ \frac {1}{2} & \frac {1}{2} & & - \frac {1}{2} \\ \frac {1}{2} & \frac {1}{2} & \frac {1}{2} \end{array} \right].
$$

This matrix is now skew-symmetric. Skew-symmetric matrices are a particular case of normal matrices with pure-imaginary eigenvalues.

Gu et al. (2020a) also consider a case of HiPPO corresponding to the generalized Laguerre polynomials that generalize the above HiPPO-LagT case. In this case, the matrix $A$ (up to conjugation by a diagonal matrix) ends up being close to the above matrix, but with a different element on the diagonal. After adding the rank-1 correction, it becomes the above skew-symmetric matrix plus a multiple of the identity. Thus after diagonalization by the same matrix as in the LagT case, it is still reduced to diagonal plus low-rank (DPLR) form, where the diagonal is now pure imaginary plus a real constant.

HiPPO-LegS. We restate the formula from equation (2) for convenience.

$$
\boldsymbol {A} _ {n k} = - \left\{ \begin{array}{l l} (2 n + 1) ^ {1 / 2} (2 k + 1) ^ {1 / 2} & \text {if } n > k \\ n + 1 & \text {if } n = k \\ 0 & \text {if } n < k \end{array} \right..
$$

Adding $\frac{1}{2} (2n + 1)^{1 / 2}(2k + 1)^{1 / 2}$ to the whole matrix gives

$$
- \left\{ \begin{array}{l l} \frac {1}{2} (2 n + 1) ^ {1 / 2} (2 k + 1) ^ {1 / 2} & \text {if } n > k \\ \frac {1}{2} & \text {if } n = k \\ - \frac {1}{2} (2 n + 1) ^ {1 / 2} (2 k + 1) ^ {1 / 2} & \text {if } n < k \end{array} \right.
$$

Note that this matrix is not skew-symmetric, but is $-\frac{1}{2}\pmb {I} + \pmb{S}$ where $\pmb{S}$ is a skew-symmetric matrix. This is diagonalizable by the same unitary matrix that diagonalizes $\pmb{S}$.

HiPPO-LegT. Up to the diagonal scaling, the LegT matrix is

$$
\boldsymbol {A} = - \left[ \begin{array}{c c c c c} 1 & - 1 & 1 & - 1 & \dots \\ 1 & 1 & - 1 & 1 \\ 1 & 1 & 1 & - 1 \\ 1 & 1 & 1 & 1 \\ \vdots & & & & \ddots \end{array} \right].
$$

By adding $-1$ to this matrix and then the matrix

$$
\left[ \begin{array}{c c c} & & \\ 2 & 2 & \\ 2 & 2 & \end{array} \right]
$$

the matrix becomes

$$
\left[ \begin{array}{c c c c} & - 2 & & - 2 \\ 2 & & & \\ & & - 2 \\ 2 & & 2 \end{array} \right]
$$

which is skew-symmetric. In fact, this matrix is the inverse of the Chebyshev Jacobi.

An alternative way to see this is as follows. The LegT matrix is the inverse of the matrix

$$
\left[ \begin{array}{c c c c} - 1 & 1 & & 0 \\ - 1 & & 1 & \\ & - 1 & & 1 \\ & & - 1 & - 1 \end{array} \right]
$$

This can obviously be converted to a skew-symmetric matrix by adding a rank 2 term. The inverses of these matrices are also rank-2 differences from each other by the Woodbury identity.
A final form is

$$
\left[ \begin{array}{c c c c} - 1 & 1 & - 1 & 1 \\ - 1 & - 1 & 1 & - 1 \\ - 1 & - 1 & - 1 & 1 \\ - 1 & - 1 & - 1 & - 1 \end{array} \right] + \left[ \begin{array}{c c c c} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{array} \right] = \left[ \begin{array}{c c c c} 0 & 1 & 0 & 1 \\ - 1 & 0 & 1 & 0 \\ 0 & - 1 & 0 & 1 \\ - 1 & 0 & - 1 & 0 \end{array} \right]
$$

This has the advantage that the rank-2 correction is symmetric (like the others), but the normal skew-symmetric matrix is now 2-quasiseparable instead of 1-quasiseparable.

# C.2 COMPUTING THE S4 RECURRENT VIEW

We prove Theorem 2 showing the efficiency of the S4 parameterization for computing one step of the recurrent representation (Section 2.3).

Recall that without loss of generality, we can assume that the state matrix $\mathbf{A} = \mathbf{\Lambda} - \mathbf{P}\mathbf{Q}^{*}$ is diagonal plus low-rank (DPLR), potentially over $\mathbb{C}$. Our goal in this section is to explicitly write out a closed form for the discretized matrix $\overline{\mathbf{A}}$.

Recall from equation (3) that

$$
\overline {{\boldsymbol {A}}} = (\boldsymbol {I} - \Delta / 2 \cdot \boldsymbol {A}) ^ {- 1} (\boldsymbol {I} + \Delta / 2 \cdot \boldsymbol {A})
$$

$$
\overline {{\boldsymbol {B}}} = (\boldsymbol {I} - \Delta / 2 \cdot \boldsymbol {A}) ^ {- 1} \Delta \boldsymbol {B}.
$$

We first simplify both terms in the definition of $\overline{A}$ independently.

Forward discretization. The first term is essentially the Euler discretization motivated in Section 2.3.
$$
\begin{array}{l} \boldsymbol {I} + \frac {\Delta}{2} \boldsymbol {A} = \boldsymbol {I} + \frac {\Delta}{2} (\boldsymbol {\Lambda} - \boldsymbol {P Q} ^ {*}) \\ = \frac {\Delta}{2} \left[ \frac {2}{\Delta} \boldsymbol {I} + (\boldsymbol {\Lambda} - \boldsymbol {P Q} ^ {*}) \right] \\ = \frac {\Delta}{2} A _ {0} \\ \end{array}
$$

where $A_0$ is defined as the term in the final brackets.

Backward discretization. The second term is known as the backward Euler method. Although this inverse term is normally difficult to deal with, in the DPLR case we can simplify it using the Woodbury identity (Proposition 4).

$$
\begin{array}{l} \left(\boldsymbol {I} - \frac {\Delta}{2} \boldsymbol {A}\right) ^ {- 1} = \left(\boldsymbol {I} - \frac {\Delta}{2} (\boldsymbol {\Lambda} - \boldsymbol {P Q} ^ {*})\right) ^ {- 1} \\ = \frac {2}{\Delta} \left[ \frac {2}{\Delta} - \boldsymbol {\Lambda} + \boldsymbol {P Q} ^ {*} \right] ^ {- 1} \\ = \frac {2}{\Delta} \left[ D - D P \left(I + Q ^ {*} D P\right) ^ {- 1} Q ^ {*} D \right] \\ = \frac {2}{\Delta} A _ {1} \\ \end{array}
$$

where $D = \left(\frac{2}{\Delta} - \mathbf{\Lambda}\right)^{-1}$ and $A_{1}$ is defined as the term in the final brackets. Note that $(1 + Q^{*}DP)$ is actually a scalar in the case when the low-rank term has rank 1.

S4 Recurrence. Finally, the full bilinear discretization can be rewritten in terms of these matrices as

$$
\overline {{A}} = A _ {1} A _ {0}
$$

$$
\overline {{B}} = \frac {2}{\Delta} A _ {1} \Delta B = 2 A _ {1} B.
$$

The discrete-time SSM (3) becomes

$$
\begin{array}{l} x _ {k} = \overline {{\boldsymbol {A}}} x _ {k - 1} + \overline {{\boldsymbol {B}}} u _ {k} \\ = \boldsymbol {A} _ {1} \boldsymbol {A} _ {0} x _ {k - 1} + 2 \boldsymbol {A} _ {1} \boldsymbol {B} u _ {k} \\ \end{array}
$$

$$
y _ {k} = C x _ {k}.
$$

Note that $A_0, A_1$ are accessed only through matrix-vector multiplications.
Since they are both DPLR, they admit $O(N)$ matrix-vector multiplication, showing Theorem 2.

# C.3 COMPUTING THE CONVOLUTIONAL VIEW

The most involved part of using SSMs efficiently is computing $\overline{\mathbf{K}}$. This algorithm was sketched in Section 3.2 and is the main motivation for the S4 parameterization. In this section, we define the necessary intermediate quantities and prove the main technical result.

The algorithm for Theorem 3 falls into roughly three stages, leading to Algorithm 1. Assuming $\mathbf{A}$ has been conjugated into diagonal plus low-rank form, we successively simplify the problem of computing $\overline{\mathbf{K}}$ by applying the techniques outlined in Section 3.2.

Remark C.1. We note that for the remainder of this section, we transpose $C$ to be a column vector of shape $\mathbb{C}^N$ or $\mathbb{C}^{N\times 1}$ instead of a matrix or row vector $\mathbb{C}^{1\times N}$ as in (1). In other words the SSM is

$$
\begin{array}{l} x ^ {\prime} (t) = \boldsymbol {A} x (t) + \boldsymbol {B} u (t) \\ y (t) = \boldsymbol {C} ^ {*} x (t) + \boldsymbol {D} u (t). \\ \end{array}
$$

This convention is made so that $C$ has the same shape as $B, P, Q$ and simplifies the implementation of S4.

**Reduction 0: Diagonalization.** By Lemma 3.1, we can switch the representation by conjugating with any unitary matrix. For the remainder of this section, we can assume that $\mathbf{A}$ is (complex) diagonal plus low-rank (DPLR).

Note that unlike diagonal matrices, a DPLR matrix does not lend itself to efficient computation of $\overline{K}$. The reason is that $\overline{K}$ computes terms $\overline{C}^* \overline{A}^i \overline{B}$ which involve powers of the matrix $\overline{A}$. These are trivially computable when $\overline{A}$ is diagonal, but this is no longer possible for even simple modifications to diagonal matrices such as DPLR.
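The $O(N)$ DPLR matrix-vector product underlying Theorem 2 is worth making concrete. The following is a minimal NumPy sketch of the rank-1 case (variable names are ours, not from the released code): the matrix $\Lambda - PQ^*$ is never materialized, and the product costs two $O(N)$ passes.

```python
import numpy as np

# O(N) matrix-vector product with a DPLR matrix A = Lambda - P Q*
# (rank-1 correction; illustrative sketch, names are ours).
rng = np.random.default_rng(0)
N = 64
Lam = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # diagonal part
P = rng.standard_normal(N) + 1j * rng.standard_normal(N)
Q = rng.standard_normal(N) + 1j * rng.standard_normal(N)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def dplr_matvec(Lam, P, Q, x):
    # (Lambda - P Q*) x = Lambda * x - P (Q* x): no N x N matrix needed
    return Lam * x - P * np.vdot(Q, x)  # np.vdot conjugates its first arg

# Agrees with the dense O(N^2) computation
A_dense = np.diag(Lam) - np.outer(P, Q.conj())
assert np.allclose(dplr_matvec(Lam, P, Q, x), A_dense @ x)
```

The same pattern applies to $A_0$ and $A_1$ above, since the Woodbury-corrected inverse is again a sum of diagonal and rank-1 terms.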
**Reduction 1: SSM Generating Function.** To address the problem of computing powers of $\overline{A}$, we introduce another technique. Instead of computing the SSM convolution filter $\overline{K}$ directly, we introduce a generating function on its coefficients and compute evaluations of it.

Definition 2 (SSM Generating Function). We define the following quantities:

- The SSM convolution function is $\mathcal{K}(\overline{A},\overline{B},\overline{C}) = (\overline{C}^{*}\overline{B},\overline{C}^{*}\overline{A}\,\overline{B},\dots)$ and the (truncated) SSM filter of length $L$

$$
\mathcal {K} _ {L} (\overline {{A}}, \overline {{B}}, \overline {{C}}) = \left(\overline {{C}} ^ {*} \overline {{B}}, \overline {{C}} ^ {*} \overline {{A}} \, \overline {{B}}, \dots , \overline {{C}} ^ {*} \overline {{A}} ^ {L - 1} \overline {{B}}\right) \in \mathbb {R} ^ {L} \tag {9}
$$

- The SSM generating function at node $z$ is

$$
\hat {\mathcal {K}} (z; \bar {A}, \bar {B}, \bar {C}) \in \mathbb {C} := \sum_ {i = 0} ^ {\infty} \bar {C} ^ {*} \bar {A} ^ {i} \bar {B} z ^ {i} = \bar {C} ^ {*} (I - \bar {A} z) ^ {- 1} \bar {B} \tag {10}
$$

and the truncated SSM generating function at node $z$ is

$$
\hat {\mathcal {K}} _ {L} (z; \overline {{\boldsymbol {A}}}, \overline {{\boldsymbol {B}}}, \overline {{\boldsymbol {C}}}) \in \mathbb {C} := \sum_ {i = 0} ^ {L - 1} \overline {{\boldsymbol {C}}} ^ {*} \overline {{\boldsymbol {A}}} ^ {i} \overline {{\boldsymbol {B}}} z ^ {i} = \overline {{\boldsymbol {C}}} ^ {*} (\boldsymbol {I} - \overline {{\boldsymbol {A}}} ^ {L} z ^ {L}) (\boldsymbol {I} - \overline {{\boldsymbol {A}}} z) ^ {- 1} \overline {{\boldsymbol {B}}} \tag {11}
$$

- The truncated SSM generating function at nodes $\Omega \in \mathbb{C}^M$ is

$$
\hat {\mathcal {K}} _ {L} (\Omega ; \bar {\boldsymbol {A}}, \bar {\boldsymbol {B}}, \bar {\boldsymbol {C}}) \in \mathbb {C} ^ {M} := \left(\hat {\mathcal {K}} _ {L} \left(\omega_ {k}; \bar {\boldsymbol {A}}, \bar {\boldsymbol {B}},
\bar {\boldsymbol {C}}\right)\right) _ {k \in [ M ]} \tag {12}
$$

Intuitively, the generating function essentially converts the SSM convolution filter from the time domain to the frequency domain. Importantly, it preserves the same information, and the desired SSM convolution filter can be recovered from evaluations of its generating function.

Lemma C.2. The SSM function $\mathcal{K}_L(\overline{A},\overline{B},\overline{C})$ can be computed from the SSM generating function $\hat{\mathcal{K}}_L(\Omega ;\overline{A},\overline{B},\overline{C})$ at the roots of unity $\Omega = \{\exp (-2\pi i\frac{k}{L}) : k\in [L]\}$ stably in $O(L\log L)$ operations.

Proof. For convenience define

$$
\begin{array}{l} \overline {{\boldsymbol {K}}} = \mathcal {K} _ {L} (\overline {{\boldsymbol {A}}}, \overline {{\boldsymbol {B}}}, \overline {{\boldsymbol {C}}}) \\ \hat {\boldsymbol {K}} = \hat {\mathcal {K}} _ {L} (\Omega ; \overline {{\boldsymbol {A}}}, \overline {{\boldsymbol {B}}}, \overline {{\boldsymbol {C}}}) \\ \hat {\boldsymbol {K}} (z) = \hat {\mathcal {K}} _ {L} (z; \overline {{\boldsymbol {A}}}, \overline {{\boldsymbol {B}}}, \overline {{\boldsymbol {C}}}). \\ \end{array}
$$

Note that

$$
\hat {\boldsymbol {K}} _ {j} = \sum_ {k = 0} ^ {L - 1} \overline {{\boldsymbol {K}}} _ {k} \exp \left(- 2 \pi i \frac {j k}{L}\right).
$$

This is exactly the Discrete Fourier Transform (DFT):

$$
\hat {\boldsymbol {K}} = \mathcal {F} _ {L} \overline {{\boldsymbol {K}}}.
$$

Therefore $\overline{\pmb{K}}$ can be recovered from $\hat{\pmb{K}}$ with a single inverse DFT, which requires $O(L\log L)$ operations with the Fast Fourier Transform (FFT) algorithm.

**Reduction 2: Woodbury Correction.** The primary motivation of Definition 2 is that it turns powers of $\overline{A}$ into a single inverse of $\overline{A}$ (equation (10)). While DPLR matrices cannot be powered efficiently due to the low-rank term, they can be inverted efficiently by the well-known Woodbury identity.
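The Woodbury identity invoked here is easy to verify numerically. The following sketch (arbitrary random test matrices; names are ours) checks that inverting a rank-$p$ correction only ever requires inverting the base matrix and a small $p \times p$ system:

```python
import numpy as np

# Numerical sanity check of the Woodbury matrix identity
# (illustrative only; real matrices, so V* is just V.T).
rng = np.random.default_rng(1)
N, p = 6, 2
A = rng.standard_normal((N, N)) + N * np.eye(N)  # keep A well-conditioned
U = rng.standard_normal((N, p))
V = rng.standard_normal((N, p))

lhs = np.linalg.inv(A + U @ V.T)

# Right-hand side inverts only A and a small p x p "core" matrix
A_inv = np.linalg.inv(A)
core = np.linalg.inv(np.eye(p) + V.T @ A_inv @ U)
rhs = A_inv - A_inv @ U @ core @ V.T @ A_inv

assert np.allclose(lhs, rhs)
```

In the S4 setting $A$ is the (shifted) diagonal $\Lambda$, so $A^{-1}$ itself is elementwise and the whole correction stays $O(N)$ per evaluation.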
+ +Proposition 4 (Binomial Inverse Theorem or Woodbury matrix identity Woodbury (1950); Golub & Van Loan (2013)). Over a commutative ring $\mathcal{R}$ , let $\pmb{A} \in \mathcal{R}^{N \times N}$ and $\pmb{U}, \pmb{V} \in \mathcal{R}^{N \times p}$ . Suppose $\pmb{A}$ and $\pmb{A} + \pmb{U}\pmb{V}^*$ are invertible. Then $\pmb{I}_p + \pmb{V}^*\pmb{A}^{-1}\pmb{U} \in \mathcal{R}^{p \times p}$ is invertible and + +$$ +\left(\boldsymbol {A} + \boldsymbol {U} \boldsymbol {V} ^ {*}\right) ^ {- 1} = \boldsymbol {A} ^ {- 1} - \boldsymbol {A} ^ {- 1} \boldsymbol {U} \left(\boldsymbol {I} _ {p} + \boldsymbol {V} ^ {*} \boldsymbol {A} ^ {- 1} \boldsymbol {U}\right) ^ {- 1} \boldsymbol {V} ^ {*} \boldsymbol {A} ^ {- 1} +$$ + +With this identity, we can convert the SSM generating function on a DPLR matrix $\mathbf{A}$ into one on just its diagonal component. + +Lemma C.3. Let $A = \Lambda - PQ^*$ be a diagonal plus low-rank representation. Then for any root of unity $z \in \Omega$ , the truncated generating function satisfies + +$$ +\begin{array}{l} \hat {\boldsymbol {K}} (z) = \frac {2}{1 + z} \left[ \tilde {\boldsymbol {C}} ^ {*} \boldsymbol {R} (z) \boldsymbol {B} - \tilde {\boldsymbol {C}} ^ {*} \boldsymbol {R} (z) \boldsymbol {P} \left(1 + \boldsymbol {Q} ^ {*} \boldsymbol {R} (z) \boldsymbol {P}\right) ^ {- 1} \boldsymbol {Q} ^ {*} \boldsymbol {R} (z) \boldsymbol {B} \right] \\ \tilde {\boldsymbol {C}} = \boldsymbol {C} (\boldsymbol {I} - \overline {{\boldsymbol {A}}} ^ {L}) \\ \boldsymbol {R} (z; \boldsymbol {\Lambda}) = \left(\frac {2}{\Delta} \frac {1 - z}{1 + z} - \boldsymbol {\Lambda}\right) ^ {- 1}. \\ \end{array} +$$ + +Proof. 
Directly expanding Definition 2 yields

$$
\begin{array}{l} \hat {\mathcal {K}} _ {L} (z; \overline {{A}}, \overline {{B}}, \overline {{C}}) = \overline {{C}} ^ {*} \overline {{B}} + \overline {{C}} ^ {*} \overline {{A}} \, \overline {{B}} z + \dots + \overline {{C}} ^ {*} \overline {{A}} ^ {L - 1} \overline {{B}} z ^ {L - 1} \\ = \overline {{C}} ^ {*} \left(\boldsymbol {I} - \overline {{\boldsymbol {A}}} ^ {L}\right) \left(\boldsymbol {I} - \overline {{\boldsymbol {A}}} z\right) ^ {- 1} \overline {{\boldsymbol {B}}} \\ = \tilde {C} ^ {*} (I - \overline {{A}} z) ^ {- 1} \overline {{B}} \\ \end{array}
$$

where $\tilde{C}^{*} = C^{*}\left(I - \overline{A}^{L}\right)$.

We can now explicitly expand the discretized SSM matrices $\overline{A}$ and $\overline{B}$ back in terms of the original SSM parameters $A$ and $B$. Lemma C.4 provides an explicit formula, which allows further simplifying

$$
\begin{array}{l} \tilde {C} ^ {*} (I - \bar {A} z) ^ {- 1} \bar {B} = \frac {2}{1 + z} \tilde {C} ^ {*} \left(\frac {2}{\Delta} \frac {1 - z}{1 + z} - A\right) ^ {- 1} B \\ = \frac {2}{1 + z} \tilde {C} ^ {*} \left(\frac {2}{\Delta} \frac {1 - z}{1 + z} - \boldsymbol {\Lambda} + \boldsymbol {P Q} ^ {*}\right) ^ {- 1} \boldsymbol {B} \\ = \frac {2}{1 + z} \left[ \tilde {\boldsymbol {C}} ^ {*} \boldsymbol {R} (z) \boldsymbol {B} - \tilde {\boldsymbol {C}} ^ {*} \boldsymbol {R} (z) \boldsymbol {P} \left(1 + \boldsymbol {Q} ^ {*} \boldsymbol {R} (z) \boldsymbol {P}\right) ^ {- 1} \boldsymbol {Q} ^ {*} \boldsymbol {R} (z) \boldsymbol {B} \right]. \\ \end{array}
$$

The last line applies the Woodbury Identity (Proposition 4) where $\pmb{R}(z) = \left(\frac{2}{\Delta}\frac{1 - z}{1 + z} - \Lambda\right)^{-1}$.

The previous proof used the following self-contained result to back out the original SSM matrices from the discretization.

Lemma C.4. Let $\overline{A},\overline{B}$ be the SSM matrices $A,B$ discretized by the bilinear discretization with step size $\Delta$.
Then

$$
C ^ {*} (I - \bar {A} z) ^ {- 1} \bar {B} = \frac {2 \Delta}{1 + z} C ^ {*} \left[ 2 \frac {1 - z}{1 + z} \boldsymbol {I} - \Delta \boldsymbol {A} \right] ^ {- 1} B
$$

Proof. Recall that the bilinear discretization that we use (equation (3)) is

$$
\overline {{A}} = \left(I - \frac {\Delta}{2} A\right) ^ {- 1} \left(I + \frac {\Delta}{2} A\right)
$$

$$
\overline {{B}} = \left(\boldsymbol {I} - \frac {\Delta}{2} \boldsymbol {A}\right) ^ {- 1} \Delta \boldsymbol {B}
$$

The result is proved by algebraic manipulations.

$$
\begin{array}{l} \boldsymbol {C} ^ {*} \left(\boldsymbol {I} - \overline {{\boldsymbol {A}}} z\right) ^ {- 1} \overline {{\boldsymbol {B}}} = \boldsymbol {C} ^ {*} \left[ \left(\boldsymbol {I} - \frac {\Delta}{2} \boldsymbol {A}\right) ^ {- 1} \left(\boldsymbol {I} - \frac {\Delta}{2} \boldsymbol {A}\right) - \left(\boldsymbol {I} - \frac {\Delta}{2} \boldsymbol {A}\right) ^ {- 1} \left(\boldsymbol {I} + \frac {\Delta}{2} \boldsymbol {A}\right) z \right] ^ {- 1} \overline {{\boldsymbol {B}}} \\ = C ^ {*} \left[ \left(\boldsymbol {I} - \frac {\Delta}{2} \boldsymbol {A}\right) - \left(\boldsymbol {I} + \frac {\Delta}{2} \boldsymbol {A}\right) z \right] ^ {- 1} \left(\boldsymbol {I} - \frac {\Delta}{2} \boldsymbol {A}\right) \overline {{\boldsymbol {B}}} \\ = C ^ {*} \left[ \boldsymbol {I} (1 - z) - \frac {\Delta}{2} \boldsymbol {A} (1 + z) \right] ^ {- 1} \Delta \boldsymbol {B} \\ = \frac {\Delta}{1 - z} C ^ {*} \left[ I - \frac {\Delta A}{2 \frac {1 - z}{1 + z}} \right] ^ {- 1} B \\ = \frac {2 \Delta}{1 + z} C ^ {*} \left[ 2 \frac {1 - z}{1 + z} I - \Delta A \right] ^ {- 1} B \\ \end{array}
$$

Note that in the S4 parameterization, instead of constantly computing $\tilde{C} = C\left(I - \overline{A}^L\right)$, we can simply reparameterize our parameters to learn $\tilde{C}$ directly instead of $C$, saving a minor computation cost and simplifying the algorithm.
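Reductions 1 and 2 can be sanity-checked end-to-end on a small SSM. The following NumPy sketch (all sizes and names are ours; we use an already-diagonal discrete-time $\overline{A}$ for simplicity) evaluates the truncated generating function at the roots of unity via the $\tilde{C}$ closed form and recovers the filter with one inverse FFT, as in Lemma C.2:

```python
import numpy as np

# Check: generating function at the roots of unity + inverse FFT
# recovers the SSM convolution filter (illustrative sketch).
rng = np.random.default_rng(2)
N, L = 4, 16
Abar = np.diag(rng.uniform(-0.9, 0.9, N))  # stable diagonal discrete A
Bbar = rng.standard_normal((N, 1))
C = rng.standard_normal((1, N))

# Direct filter K_L = (C B, C A B, ..., C A^{L-1} B)
K = np.array([(C @ np.linalg.matrix_power(Abar, i) @ Bbar).item()
              for i in range(L)])

# Truncated generating function at the roots of unity, using the
# closed form Ct (I - A z)^{-1} B with Ct = C (I - A^L)
z = np.exp(-2j * np.pi * np.arange(L) / L)
Ct = C @ (np.eye(N) - np.linalg.matrix_power(Abar, L))
Khat = np.array([(Ct @ np.linalg.inv(np.eye(N) - Abar * zk) @ Bbar).item()
                 for zk in z])

# Khat is exactly the DFT of K, so one inverse FFT recovers K
assert np.allclose(np.fft.fft(K), Khat)
assert np.allclose(np.fft.ifft(Khat).real, K)
```

In the full algorithm the resolvent $(\,\cdot\,)^{-1}$ is of course never formed densely; for diagonal plus low-rank $A$ it reduces to the Cauchy kernel evaluations described next.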
**Reduction 3: Cauchy Kernel** We have reduced the original problem of computing $\overline{K}$ to the problem of computing the SSM generating function $\hat{\mathcal{K}}_L(\Omega; \overline{A}, \overline{B}, \overline{C})$ in the case that $\overline{A}$ is a diagonal matrix. We show that this is exactly the same as a Cauchy kernel, which is a well-studied problem with fast and stable numerical algorithms.

Definition 3. A Cauchy matrix or kernel on nodes $\Omega = (\omega_i) \in \mathbb{C}^M$ and $\Lambda = (\lambda_j) \in \mathbb{C}^N$ is

$$
\boldsymbol {M} \in \mathbb {C} ^ {M \times N} = \boldsymbol {M} (\Omega , \Lambda) = (\boldsymbol {M} _ {i j}) _ {i \in [ M ], j \in [ N ]} \qquad \boldsymbol {M} _ {i j} = \frac {1}{\omega_ {i} - \lambda_ {j}}.
$$

The computation time of a Cauchy matrix-vector product of size $M \times N$ is denoted by $\mathcal{C}(M, N)$.

Computing with Cauchy matrices is an extremely well-studied problem in numerical analysis, with both fast arithmetic algorithms and fast numerical algorithms based on the famous Fast Multipole Method (FMM) (Pan, 2001; 2015; 2017).

Proposition 5 (Cauchy). A Cauchy kernel requires $O(M + N)$ space, and operation count

$$
\mathcal {C} (M, N) = \left\{ \begin{array}{l l} O \left(M N\right) & \text {naively} \\ O \left(\left(M + N\right) \log^ {2} (M + N)\right) & \text {in exact arithmetic} \\ O \left(\left(M + N\right) \log (M + N) \log \frac {1}{\varepsilon}\right) & \text {numerically to precision } \varepsilon . \end{array} \right.
$$

Corollary C.5. Evaluating $Q^{*}R(\Omega; \Lambda)P$ (defined in Lemma C.3) for any set of nodes $\Omega \in \mathbb{C}^{L}$, diagonal matrix $\Lambda$, and vectors $P, Q$ can be computed in $\mathcal{C}(L, N)$ operations and $O(L + N)$ space, where $\mathcal{C}(L, N) = \tilde{O}(L + N)$ is the cost of a Cauchy matrix-vector multiplication.

Proof.
For any fixed $\omega \in \Omega$, we want to compute $\sum_{j} \frac{q_j^* p_j}{\omega - \lambda_j}$. Computing this over all $\omega_i$ is therefore exactly a Cauchy matrix-vector multiplication.

This completes the proof of Theorem 3. In Algorithm 1, note that the work is dominated by Step 2, which has a constant number of calls to a black-box Cauchy kernel, with complexity given by Proposition 5.

# D EXPERIMENT DETAILS AND FULL RESULTS

This section contains full experimental procedures and extended results and citations for our experimental evaluation in Section 4. Appendix D.1 corresponds to benchmarking results in Section 4.1, Appendix D.2 corresponds to LRD experiments (LRA and Speech Commands) in Section 4.2, and Appendix D.3 corresponds to the general sequence modeling experiments (generation, image classification, forecasting) in Section 4.3.

# D.1 BENCHMARKING

Benchmarking results from Table 1 and Table 2 were tested on a single A100 GPU.

Benchmarks against LSSL For a given dimension $H$, a single LSSL or S4 layer was constructed with $H$ hidden features. For LSSL, the state size $N$ was set to $H$ as done in (Gu et al., 2021). For S4, the state size $N$ was set to parameter-match the LSSL, which resulted in a state size of $\frac{N}{4}$ due to differences in the parameterization. Table 1 benchmarks a single forward+backward pass of a single layer.

Benchmarks against Efficient Transformers Following (Tay et al., 2021), the Transformer models had 4 layers, hidden dimension 256 with 4 heads, query/key/value projection dimension 128, and batch size 32, for a total of roughly $600k$ parameters. The S4 model was parameter-matched while keeping the depth and hidden dimension constant (leading to a state size of $N = 256$).

We note that the relative orderings of these methods can vary depending on the exact hyperparameter settings.
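A minimal sketch of the timing protocol behind these throughput tables (illustrative only; the actual benchmarks time a forward+backward pass of the real layers on an A100):

```python
# Measure throughput (steps per second) of a callable, with warmup iterations
# excluded so one-time setup costs do not skew the result.
import time

def throughput(step, n_warmup=2, n_iters=5):
    for _ in range(n_warmup):
        step()
    start = time.perf_counter()
    for _ in range(n_iters):
        step()
    return n_iters / (time.perf_counter() - start)

rate = throughput(lambda: time.sleep(0.001))  # stand-in for a forward+backward pass
```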
# D.2 LONG-RANGE DEPENDENCIES

This section includes information for reproducing our experiments on the Long-Range Arena and Speech Commands long-range dependency tasks.

Long Range Arena Table 8 contains the extended results table with all 11 methods considered in (Tay et al., 2021).

For the S4 model, hyperparameters for all datasets are reported in Table 9. For all datasets, we used the AdamW optimizer with a constant learning rate schedule with decay on validation plateau. However, the learning rate on the HiPPO parameters (in particular $\Lambda, P, Q, B, C, \Delta$) was reduced to a maximum starting LR of 0.001, which improves stability since the HiPPO equation is crucial to performance.

The S4 state size was always fixed to $N = 64$.

As S4 is a sequence-to-sequence model with output shape (batch, length, dimension) and LRA tasks are classification, mean pooling along the length dimension was applied after the last layer.

We note that most of these results were trained for far longer than what was necessary to achieve SotA results (e.g., the Image task reaches SotA in 1 epoch). Results often keep improving with longer training times.

Hardware. All models were run on a single GPU. Some tasks used an A100 GPU (notably, the Path-X experiments), which has a larger max memory of 40GB. To reproduce these on smaller GPUs, the batch size can be reduced or gradients can be accumulated over two batches.

**Path-X.** We remark that an earlier version of this paper reported a higher score for Path-X. This earlier version used a different variant of the dataset, due to a misunderstanding of the properties of the dataset. More specifically, we found that S4 scored $93.68\%$ accuracy on a version that involved

Table 8: Full results for the Long Range Arena (LRA) benchmark for long-range dependencies in sequence models. (Top): Original Transformer variants in LRA. (Bottom): Other models reported in the literature.
| Model | LISTOPS | TEXT | RETRIEVAL | IMAGE | PATHFINDER | PATH-X | AVG |
|---|---|---|---|---|---|---|---|
| Random | 10.00 | 50.00 | 50.00 | 10.00 | 50.00 | 50.00 | 36.67 |
| Transformer | 36.37 | 64.27 | 57.46 | 42.44 | 71.40 | ✗ | 53.66 |
| Local Attention | 15.82 | 52.98 | 53.39 | 41.46 | 66.63 | ✗ | 46.71 |
| Sparse Trans. | 17.07 | 63.58 | 59.59 | 44.24 | 71.71 | ✗ | 51.03 |
| Longformer | 35.63 | 62.85 | 56.89 | 42.22 | 69.71 | ✗ | 52.88 |
| Linformer | 35.70 | 53.94 | 52.27 | 38.56 | 76.34 | ✗ | 51.14 |
| Reformer | 37.27 | 56.10 | 53.40 | 38.07 | 68.50 | ✗ | 50.56 |
| Sinkhorn Trans. | 33.67 | 61.20 | 53.83 | 41.23 | 67.45 | ✗ | 51.23 |
| Synthesizer | 36.99 | 61.68 | 54.67 | 41.61 | 69.45 | ✗ | 52.40 |
| BigBird | 36.05 | 64.02 | 59.29 | 40.83 | 74.87 | ✗ | 54.17 |
| Linear Trans. | 16.13 | 65.90 | 53.09 | 42.34 | 75.30 | ✗ | 50.46 |
| Performer | 18.01 | 65.40 | 53.82 | 42.77 | 77.05 | ✗ | 51.18 |
| FNet | 35.33 | 65.11 | 59.61 | 38.67 | 77.80 | ✗ | 54.42 |
| Nyströmformer | 37.15 | 65.52 | 79.56 | 41.58 | 70.94 | ✗ | 57.46 |
| Luna-256 | 37.25 | 64.57 | 79.29 | 47.38 | 77.72 | ✗ | 59.37 |
| S4 | 58.35 | 76.02 | 87.09 | 87.26 | 86.05 | 88.10 | 80.48 |
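The reduced learning rate on the HiPPO parameters described above is implemented by placing them in a separate optimizer parameter group; a framework-agnostic sketch (the name-matching rule and parameter names are illustrative, not the actual S4 code):

```python
# Split parameters into two optimizer groups: SSM/HiPPO parameters get a
# capped learning rate (max 0.001), everything else uses the base rate.
# The returned structure matches what optimizers such as AdamW accept.
def make_param_groups(named_params, base_lr=0.004, ssm_lr=0.001,
                      ssm_names=("Lambda", "P", "Q", "B", "C", "Delta")):
    ssm, other = [], []
    for name, param in named_params:
        (ssm if name.split(".")[-1] in ssm_names else other).append(param)
    return [{"params": other, "lr": base_lr},
            {"params": ssm, "lr": min(ssm_lr, base_lr)}]

groups = make_param_groups([("layers.0.Lambda", "p0"), ("layers.0.out.weight", "p1")])
```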
+ +Table 9: The values of the best hyperparameters found for classification datasets; LRA (Top) and images/speech (Bottom). LR is learning rate and WD is weight decay. BN and LN refer to Batch Normalization and Layer Normalization. + +
| Dataset | Depth | Features H | Norm | Pre-norm | Dropout | LR | Batch Size | Epochs | WD | Patience |
|---|---|---|---|---|---|---|---|---|---|---|
| ListOps | 6 | 128 | BN | False | 0 | 0.01 | 100 | 50 | 0.01 | 5 |
| Text | 4 | 64 | BN | True | 0 | 0.001 | 50 | 20 | 0 | 5 |
| Retrieval | 6 | 256 | BN | True | 0 | 0.002 | 64 | 20 | 0 | 20 |
| Image | 6 | 512 | LN | False | 0.2 | 0.004 | 50 | 200 | 0.01 | 20 |
| Pathfinder | 6 | 256 | BN | True | 0.1 | 0.004 | 100 | 200 | 0 | 10 |
| Path-X | 6 | 256 | BN | True | 0.0 | 0.0005 | 32 | 100 | 0 | 20 |
| CIFAR-10 | 6 | 1024 | LN | False | 0.25 | 0.01 | 50 | 200 | 0.01 | 20 |
| Speech Commands (MFCC) | 4 | 256 | LN | False | 0.2 | 0.01 | 100 | 50 | 0 | 5 |
| Speech Commands (Raw) | 6 | 128 | BN | True | 0.1 | 0.01 | 20 | 150 | 0 | 10 |
taking the $256 \times 256$ resolution version of the Pathfinder dataset and averaging every $2 \times 2$ square; we erroneously thought that this version of the dataset was equivalent to the original Path-X.

After discussions with the LRA authors, we discovered that this is not equivalent to the $128 \times 128$ resolution Pathfinder dataset (the correct Path-X), which is in fact much harder. In fact, Path-X is so difficult that a 2D CNN without a global receptive field (e.g. ResNet-18 or ResNet-34) also cannot achieve above-chance accuracy. This fact led to the original misunderstanding, as we could not solve this image classification task even with a ResNet and thought the data might have errors.

Speech Commands We provide details of the sweeps for the baseline methods we ran ourselves; numbers for all other methods are taken from Gu et al. (2021). The best hyperparameters used for S4 are included in Table 9.

Transformer (Vaswani et al., 2017) For MFCC, we swept the number of model layers $\{2,4\}$, dropout $\{0,0.1\}$ and learning rates $\{0.001,0.0005\}$. We used 8 attention heads, model dimension 128, prenorm, positional encodings, and trained for 150 epochs with a batch size of 100. For Raw, the Transformer model's memory usage made training impossible.

Performer (Choromanski et al., 2020) For MFCC, we swept the number of model layers $\{2,4\}$, dropout $\{0,0.1\}$ and learning rates $\{0.001,0.0005\}$. We used 8 attention heads, model dimension 128, prenorm, positional encodings, and trained for 150 epochs with a batch size of 100. For Raw, we used a model dimension of 128, 4 attention heads, prenorm, and a batch size of 16. We reduced the number of model layers to 4, so the model would fit on the single GPU. We trained for 100 epochs with a learning rate of 0.001 and no dropout.

ExpRNN (Lezcano-Casado & Martínez-Rubio, 2019) For MFCC, we swept hidden sizes $\{256, 512\}$ and learning rates $\{0.001, 0.002, 0.0005\}$.
Training was run for 200 epochs, with a single-layer model using a batch size of 100. For Raw, we swept hidden sizes $\{32, 64\}$ and learning rates $\{0.001, 0.0005\}$ (however, ExpRNN failed to learn).

Lipschitz RNN (Erichson et al., 2021) For MFCC, we swept hidden sizes $\{256, 512\}$ and learning rates $\{0.001, 0.002, 0.0005\}$. Training was run for 150 epochs, with a single-layer model using a batch size of 100. For Raw, we found that LipschitzRNN was too slow to train on a single GPU (requiring a full day for 1 epoch of training alone).

WaveGAN Discriminator (Donahue et al., 2019) The WaveGAN-D in Table 4 is actually our improved version of the discriminator network from the recent WaveGAN model for speech (Donahue et al., 2019). This CNN did not work well out-of-the-box, and we added several features to help it perform better. The final model is highly specialized compared to our model, and includes:

- Downsampling or pooling between layers, induced by strided convolutions, which decreases the sequence length between layers.
- A global fully-connected output layer; thus the model only works for one input sequence length and does not work on MFCC features or the frequency-shift setting in Table 4.
- Batch Normalization is essential, whereas S4 works equally well with either Batch Normalization or Layer Normalization.
- Almost $90 \times$ as many parameters as the S4 model (26.3M vs. 0.3M).

# D.3 GENERAL SEQUENCE MODELING

This subsection corresponds to the experiments in Section 4.3. Because of the number of experiments in this section, we use subsubsection dividers for the different tasks to make it easier to follow: CIFAR-10 density estimation in Appendix D.3.1, WikiText-103 language modeling in Appendix D.3.2, autoregressive generation in Appendix D.3.3, sequential image classification in Appendix D.3.4, and time-series forecasting in Appendix D.3.5.
# D.3.1 CIFAR DENSITY ESTIMATION

This task used a different backbone than the rest of our experiments. We used blocks of alternating S4 layers and position-wise feed-forward layers (in the style of Transformer blocks). Each feed-forward intermediate dimension was set to $2 \times$ the hidden size of the incoming S4 layer. Similar to Salimans et al. (2017), we used a UNet-style backbone consisting of $B$ identical blocks followed by a downsampling layer. The downsampling rates were 3, 4, 4 (the 3 chosen because the sequence consists of RGB pixels). The base model had $B = 8$ with starting hidden dimension 128, while the large model had $B = 16$ with starting hidden dimension 192.

We experimented with both the mixture of logistics from (Salimans et al., 2017) as well as a simpler 256-way categorical loss. We found they performed comparably and ended up using the simpler softmax loss along with input embeddings.

We used the LAMB optimizer with learning rate 0.005. The base model had no dropout, while the large model had dropout 0.1 before the linear layers inside the S4 and FF blocks.

# D.3.2 WIKITEXT-103 LANGUAGE MODELING

The RNN baselines included in Table 7 are the AWD-QRNN (Merity et al., 2018), an efficient linear gated RNN, and the LSTM + Cache + Hebbian + MbPA (Rae et al., 2018), the best performing pure RNN in the literature. The CNN baselines are the CNN with GLU activations (Dauphin et al., 2017), the TrellisNet (Bai et al., 2019), Dynamic Convolutions (Wu et al., 2019), and TaLK Convolutions (Lioutas & Guo, 2020).

The Transformer baseline is (Baevski & Auli, 2018), which uses Adaptive Inputs with a tied Adaptive Softmax. This model is a standard high-performing Transformer baseline on this benchmark, used for example by Lioutas & Guo (2020) and many more.

Our S4 model uses the same Transformer backbone as in (Baevski & Auli, 2018).
The model consists of 16 blocks of S4 layers alternated with position-wise feedforward layers, with a feature dimension of 1024. Because our S4 layer has around 1/4 the number of parameters as a self-attention layer with the same dimension, we made two modifications to match the parameter count better: (i) we used a GLU activation after the S4 linear layer (Section 3.4), and (ii) we used two S4 layers per block. Blocks use Layer Normalization in the pre-norm position. The embedding and softmax layers were the Adaptive Embedding from (Baevski & Auli, 2018) with standard cutoffs 20000, 40000, 200000.

Evaluation was performed similarly to the basic setting in (Baevski & Auli, 2018), Table 5, which involves sliding non-overlapping windows of width 1024 tokens. Other settings reported in (Baevski & Auli, 2018) include more context at training and evaluation time and improve the score. Because such evaluation protocols are orthogonal to the basic model, we do not consider them and report the base score from (Baevski & Auli, 2018) Table 5.

Instead of SGD with momentum and multiple cosine learning rate annealing cycles, our S4 model was trained with the simpler AdamW optimizer with a single cosine learning rate cycle with a maximum of 800000 steps. The initial learning rate was set to 0.0005. We used 8 A100 GPUs with a batch size of 8 per GPU and context size 1024. We used no gradient clipping and a weight decay of 0.1. Unlike (Baevski & Auli, 2018), which specified different dropout rates for different parameters, we used a constant dropout rate of 0.25 throughout the network, including before every linear layer and on the residual branches.

# D.3.3 AUTOREGRESSIVE GENERATION SPEED

Protocol. To account for different model sizes and memory requirements for each method, we benchmark generation speed by throughput, measured in images per second (Table 6) or tokens per second (Table 7).
Each model generates images on a single A100 GPU, maximizing the batch size to fit in memory. (For CIFAR-10 generation we limited memory to 16GB, to be more comparable to the Transformer and Linear Transformer results reported from (Katharopoulos et al., 2020).)

Baselines. The Transformer and Linear Transformer baselines reported in Table 6 are the results reported directly from Katharopoulos et al. (2020). Note that the Transformer number is the one in their Appendix, which implements the optimized cached implementation of self-attention.

For all other baseline models, we used open source implementations of the models to benchmark generation speed. For the PixelCNN++, we used the fast cached version by Ramachandran et al. (2017), which sped up generation by orders of magnitude from the naive implementation. This code was only available in TensorFlow, which may have slight differences compared to the rest of the baselines, which were implemented in PyTorch.

We were unable to run the Sparse Transformer (Child et al., 2019) model due to issues with their custom CUDA implementation of the sparse attention kernel, which we were unable to resolve.

The Transformer baseline from Table 7 was run using a modified GPT-2 backbone from the HuggingFace repository, configured to recreate the architecture reported in (Baevski & Auli, 2018). These numbers are actually slightly favorable to the baseline, as we did not include the timing of the embedding or softmax layers, whereas the number reported for S4 is for the full model.

# D.3.4 PIXEL-LEVEL SEQUENTIAL IMAGE CLASSIFICATION

Our models were trained with the AdamW optimizer for up to 200 epochs. Hyperparameters for the CIFAR-10 model are reported in Table 9.

For our comparisons against ResNet-18, the main differences between the base models are that S4 uses LayerNorm by default while ResNet uses BatchNorm.
The last ablation in Section 4.3 swaps the normalization type, using BatchNorm for S4 and LayerNorm for ResNet, to ablate this architectural difference. The experiments with augmentation take the base model and train with mild data augmentation: horizontal flips and random crops (with symmetric padding). + +# D.3.5 TIME SERIES FORECASTING COMPARED TO INFORMER + +We include a simple figure (Fig. 3) contrasting the architecture of S4 against that of the Informer (Zhou et al., 2021). + +In Fig. 3, the goal is to forecast a contiguous range of future predictions (Green, length $F$ ) given a range of past context (Blue, length $C$ ). We simply concatenate the entire context with a sequence of masks set to the length of the forecast window. This input is a single sequence of length $C + F$ that + +Table 10: (Pixel-level image classification.) Citations refer to the original model; additional citation indicates work from which this baseline is reported. + +
| Model | sMNIST | pMNIST | sCIFAR |
|---|---|---|---|
| Transformer (Vaswani et al., 2017; Trinh et al., 2018) | 98.9 | 97.9 | 62.2 |
| CKConv (Romero et al., 2021) | 99.32 | 98.54 | 63.74 |
| TrellisNet (Bai et al., 2019) | 99.20 | 98.13 | 73.42 |
| TCN (Bai et al., 2018) | 99.0 | 97.2 | - |
| LSTM (Hochreiter & Schmidhuber, 1997; Gu et al., 2020b) | 98.9 | 95.11 | 63.01 |
| r-LSTM (Trinh et al., 2018) | 98.4 | 95.2 | 72.2 |
| Dilated GRU (Chang et al., 2017) | 99.0 | 94.6 | - |
| Dilated RNN (Chang et al., 2017) | 98.0 | 96.1 | - |
| IndRNN (Li et al., 2018) | 99.0 | 96.0 | - |
| expRNN (Lezcano-Casado & Martínez-Rubio, 2019) | 98.7 | 96.6 | - |
| UR-LSTM | 99.28 | 96.96 | 71.00 |
| UR-GRU (Gu et al., 2020b) | 99.27 | 96.51 | 74.4 |
| LMU (Voelker et al., 2019) | - | 97.15 | - |
| HiPPO-RNN (Gu et al., 2020a) | 98.9 | 98.3 | 61.1 |
| UnICORNN (Rusch & Mishra, 2021) | - | 98.4 | - |
| LMUFFT (Chilkuri & Eliasmith, 2021) | - | 98.49 | - |
| LipschitzRNN (Erichson et al., 2021) | 99.4 | 96.3 | 64.2 |
| S4 | 99.63 | 98.70 | 91.13 |
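For reference, the pixel-level tasks in Table 10 flatten each image into a 1-D sequence; a sketch (sCIFAR: a $32 \times 32$ RGB image becomes a length-1024 sequence of 3-channel tokens; pMNIST additionally applies one fixed random permutation of pixel order to every image):

```python
import numpy as np

rng = np.random.default_rng(0)
cifar = rng.standard_normal((32, 32, 3))       # one CIFAR-10 image, HWC
scifar_seq = cifar.reshape(32 * 32, 3)         # length-1024 sequence, 3 channels
mnist = rng.standard_normal((28, 28, 1))
perm = rng.permutation(28 * 28)                # fixed permutation shared by all images
pmnist_seq = mnist.reshape(28 * 28, 1)[perm]   # pMNIST-style sequence
assert scifar_seq.shape == (1024, 3) and pmnist_seq.shape == (784, 1)
```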
+ +![](images/9a0db363d2fa692a48147a8eddcbb0747cfaaabace9e3a181201c75099cdbb21.jpg) +Figure 3: Comparison of S4 and specialized time-series models for forecasting tasks. (Top Left) The forecasting task involves predicting future values of a time-series given past context. (Bottom Left) We perform simple forecasting using a sequence model such as S4 as a black box. (Right) Informer uses an encoder-decoder architecture designed specifically for forecasting problems involving a customized attention module (figure taken from Zhou et al. (2021)). + +![](images/b8f5f7364f265c1353d3d539d3ecefe1724795b82dfc7677b344de583d9be2ff.jpg) + +is run through the same simple deep S4 model used throughout this work, which maps to an output of length $C + F$ . We then use just the last $F$ features as the forecasted predictions. + +Tables 11 and 12 contain full results on all 50 settings considered by Zhou et al. (2021). S4 sets the best results on 40 out of 50 of these settings. + +
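The mask-and-concatenate forecasting setup described above can be sketched as follows; `model` here is a placeholder for the trained deep S4 network (any sequence-to-sequence map would do):

```python
# Concatenate a length-C context with F mask tokens, run one seq-to-seq pass,
# and read the forecast off the last F output positions.
import numpy as np

C, F, d = 96, 24, 1
context = np.random.default_rng(0).standard_normal((C, d))
masks = np.zeros((F, d))                       # mask tokens for the forecast window
x = np.concatenate([context, masks], axis=0)   # one input sequence of length C + F
model = lambda seq: seq                        # stand-in for the deep S4 model
forecast = model(x)[-F:]                       # last F features are the prediction
assert x.shape == (C + F, d) and forecast.shape == (F, d)
```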
Each cell reports MSE / MAE.

| Dataset | Horizon | S4 | Informer | Informer† | LogTrans | Reformer | LSTMa | DeepAR | ARIMA | Prophet |
|---|---|---|---|---|---|---|---|---|---|---|
| ETTh1 | 24 | 0.061 / 0.191 | 0.098 / 0.247 | 0.092 / 0.246 | 0.103 / 0.259 | 0.222 / 0.389 | 0.114 / 0.272 | 0.107 / 0.280 | 0.108 / 0.284 | 0.115 / 0.275 |
| | 48 | 0.079 / 0.220 | 0.158 / 0.319 | 0.161 / 0.322 | 0.167 / 0.328 | 0.284 / 0.445 | 0.193 / 0.358 | 0.162 / 0.327 | 0.175 / 0.424 | 0.168 / 0.330 |
| | 168 | 0.104 / 0.258 | 0.183 / 0.346 | 0.187 / 0.355 | 0.207 / 0.375 | 1.522 / 1.191 | 0.236 / 0.392 | 0.239 / 0.422 | 0.396 / 0.504 | 1.224 / 0.763 |
| | 336 | 0.080 / 0.229 | 0.222 / 0.387 | 0.215 / 0.369 | 0.230 / 0.398 | 1.860 / 1.124 | 0.590 / 0.698 | 0.445 / 0.552 | 0.468 / 0.593 | 1.549 / 1.820 |
| | 720 | 0.116 / 0.271 | 0.269 / 0.435 | 0.257 / 0.421 | 0.273 / 0.463 | 2.112 / 1.436 | 0.683 / 0.768 | 0.658 / 0.707 | 0.659 / 0.766 | 2.735 / 3.253 |
| ETTh2 | 24 | 0.095 / 0.234 | 0.093 / 0.240 | 0.099 / 0.241 | 0.102 / 0.255 | 0.263 / 0.437 | 0.155 / 0.307 | 0.098 / 0.263 | 3.554 / 0.445 | 0.199 / 0.381 |
| | 48 | 0.191 / 0.346 | 0.155 / 0.314 | 0.159 / 0.317 | 0.169 / 0.348 | 0.458 / 0.545 | 0.190 / 0.348 | 0.163 / 0.341 | 3.190 / 0.474 | 0.304 / 0.462 |
| | 168 | 0.167 / 0.333 | 0.232 / 0.389 | 0.235 / 0.390 | 0.246 / 0.422 | 1.029 / 0.879 | 0.385 / 0.514 | 0.255 / 0.414 | 2.800 / 0.595 | 2.145 / 1.068 |
| | 336 | 0.189 / 0.361 | 0.263 / 0.417 | 0.258 / 0.423 | 0.267 / 0.437 | 1.668 / 1.228 | 0.558 / 0.606 | 0.604 / 0.607 | 2.753 / 0.738 | 2.096 / 2.543 |
| | 720 | 0.187 / 0.358 | 0.277 / 0.431 | 0.285 / 0.442 | 0.303 / 0.493 | 2.030 / 1.721 | 0.640 / 0.681 | 0.429 / 0.580 | 2.878 / 1.044 | 3.355 / 4.664 |
| ETTm1 | 24 | 0.024 / 0.117 | 0.030 / 0.137 | 0.034 / 0.160 | 0.065 / 0.202 | 0.095 / 0.228 | 0.121 / 0.233 | 0.091 / 0.243 | 0.090 / 0.206 | 0.120 / 0.290 |
| | 48 | 0.051 / 0.174 | 0.069 / 0.203 | 0.066 / 0.194 | 0.078 / 0.220 | 0.249 / 0.390 | 0.305 / 0.411 | 0.219 / 0.362 | 0.179 / 0.306 | 0.133 / 0.305 |
| | 96 | 0.086 / 0.229 | 0.194 / 0.372 | 0.187 / 0.384 | 0.199 / 0.386 | 0.920 / 0.767 | 0.287 / 0.420 | 0.364 / 0.496 | 0.272 / 0.399 | 0.194 / 0.396 |
| | 288 | 0.160 / 0.327 | 0.401 / 0.554 | 0.409 / 0.548 | 0.411 / 0.572 | 1.108 / 1.245 | 0.524 / 0.584 | 0.948 / 0.795 | 0.462 / 0.558 | 0.452 / 0.574 |
| | 672 | 0.292 / 0.466 | 0.512 / 0.644 | 0.519 / 0.665 | 0.598 / 0.702 | 1.793 / 1.528 | 1.064 / 0.873 | 2.437 / 1.352 | 0.639 / 0.697 | 2.747 / 1.174 |
| Weather | 24 | 0.125 / 0.254 | 0.117 / 0.251 | 0.119 / 0.256 | 0.136 / 0.279 | 0.231 / 0.401 | 0.131 / 0.254 | 0.128 / 0.274 | 0.219 / 0.355 | 0.302 / 0.433 |
| | 48 | 0.181 / 0.305 | 0.178 / 0.318 | 0.185 / 0.316 | 0.206 / 0.356 | 0.328 / 0.423 | 0.190 / 0.334 | 0.203 / 0.353 | 0.273 / 0.409 | 0.445 / 0.536 |
| | 168 | 0.198 / 0.333 | 0.266 / 0.398 | 0.269 / 0.404 | 0.309 / 0.439 | 0.654 / 0.634 | 0.341 / 0.448 | 0.293 / 0.451 | 0.503 / 0.599 | 2.441 / 1.142 |
| | 336 | 0.300 / 0.417 | 0.297 / 0.416 | 0.310 / 0.422 | 0.359 / 0.484 | 1.792 / 1.093 | 0.456 / 0.554 | 0.585 / 0.644 | 0.728 / 0.730 | 1.987 / 2.468 |
| | 720 | 0.245 / 0.375 | 0.359 / 0.466 | 0.361 / 0.471 | 0.388 / 0.499 | 2.087 / 1.534 | 0.866 / 0.809 | 0.499 / 0.596 | 1.062 / 0.943 | 3.859 / 1.144 |
| ECL | 48 | 0.222 / 0.350 | 0.239 / 0.359 | 0.238 / 0.368 | 0.280 / 0.429 | 0.971 / 0.884 | 0.493 / 0.539 | 0.204 / 0.357 | 0.879 / 0.764 | 0.524 / 0.595 |
| | 168 | 0.331 / 0.421 | 0.447 / 0.503 | 0.442 / 0.514 | 0.454 / 0.529 | 1.671 / 1.587 | 0.723 / 0.655 | 0.315 / 0.436 | 1.032 / 0.833 | 2.725 / 1.273 |
| | 336 | 0.328 / 0.422 | 0.489 / 0.528 | 0.501 / 0.552 | 0.514 / 0.563 | 3.528 / 2.196 | 1.212 / 0.898 | 0.414 / 0.519 | 1.136 / 0.876 | 2.246 / 3.077 |
| | 720 | 0.428 / 0.494 | 0.540 / 0.571 | 0.543 / 0.578 | 0.558 / 0.609 | 4.891 / 4.047 | 1.511 / 0.966 | 0.563 / 0.595 | 1.251 / 0.933 | 4.243 / 1.415 |
| | 960 | 0.432 / 0.497 | 0.582 / 0.608 | 0.594 / 0.638 | 0.624 / 0.645 | 7.019 / 5.105 | 1.545 / 1.006 | 0.657 / 0.683 | 1.370 / 0.982 | 6.901 / 4.264 |
| Count | | 22 | 5 | 0 | 0 | 0 | 0 | 2 | 0 | 0 |
+ +Table 11: Univariate long sequence time-series forecasting results on four datasets (five cases). + +
Each cell reports MSE / MAE.

| Dataset | Horizon | S4 | Informer | Informer† | LogTrans | Reformer | LSTMa | LSTnet |
|---|---|---|---|---|---|---|---|---|
| ETTh1 | 24 | 0.525 / 0.542 | 0.577 / 0.549 | 0.620 / 0.577 | 0.686 / 0.604 | 0.991 / 0.754 | 0.650 / 0.624 | 1.293 / 0.901 |
| | 48 | 0.641 / 0.615 | 0.685 / 0.625 | 0.692 / 0.671 | 0.766 / 0.757 | 1.313 / 0.906 | 0.702 / 0.675 | 1.456 / 0.960 |
| | 168 | 0.980 / 0.779 | 0.931 / 0.752 | 0.947 / 0.797 | 1.002 / 0.846 | 1.824 / 1.138 | 1.212 / 0.867 | 1.997 / 1.214 |
| | 336 | 1.407 / 0.910 | 1.128 / 0.873 | 1.094 / 0.813 | 1.362 / 0.952 | 2.117 / 1.280 | 1.424 / 0.994 | 2.655 / 1.369 |
| | 720 | 1.162 / 0.842 | 1.215 / 0.896 | 1.241 / 0.917 | 1.397 / 1.291 | 2.415 / 1.520 | 1.960 / 1.322 | 2.143 / 1.380 |
| ETTh2 | 24 | 0.871 / 0.736 | 0.720 / 0.665 | 0.753 / 0.727 | 0.828 / 0.750 | 1.531 / 1.613 | 1.143 / 0.813 | 2.742 / 1.457 |
| | 48 | 1.240 / 0.867 | 1.457 / 1.001 | 1.461 / 1.077 | 1.806 / 1.034 | 1.871 / 1.735 | 1.671 / 1.221 | 3.567 / 1.687 |
| | 168 | 2.580 / 1.255 | 3.489 / 1.515 | 3.485 / 1.612 | 4.070 / 1.681 | 4.660 / 1.846 | 4.117 / 1.674 | 3.242 / 2.513 |
| | 336 | 1.980 / 1.128 | 2.723 / 1.340 | 2.626 / 1.285 | 3.875 / 1.763 | 4.028 / 1.688 | 3.434 / 1.549 | 2.544 / 2.591 |
| | 720 | 2.650 / 1.340 | 3.467 / 1.473 | 3.548 / 1.495 | 3.913 / 1.552 | 5.381 / 2.015 | 3.963 / 1.788 | 4.625 / 3.709 |
| ETTm1 | 24 | 0.426 / 0.487 | 0.323 / 0.369 | 0.306 / 0.371 | 0.419 / 0.412 | 0.724 / 0.607 | 0.621 / 0.629 | 1.968 / 1.170 |
| | 48 | 0.580 / 0.565 | 0.494 / 0.503 | 0.465 / 0.470 | 0.507 / 0.583 | 1.098 / 0.777 | 1.392 / 0.939 | 1.999 / 1.215 |
| | 96 | 0.699 / 0.649 | 0.678 / 0.614 | 0.681 / 0.612 | 0.768 / 0.792 | 1.433 / 0.945 | 1.339 / 0.913 | 2.762 / 1.542 |
| | 288 | 0.824 / 0.674 | 1.056 / 0.786 | 1.162 / 0.879 | 1.462 / 1.320 | 1.820 / 1.094 | 1.740 / 1.124 | 1.257 / 2.076 |
| | 672 | 0.846 / 0.709 | 1.192 / 0.926 | 1.231 / 1.103 | 1.669 / 1.461 | 2.187 / 1.232 | 2.736 / 1.555 | 1.917 / 2.941 |
| Weather | 24 | 0.334 / 0.385 | 0.335 / 0.381 | 0.349 / 0.397 | 0.435 / 0.477 | 0.655 / 0.583 | 0.546 / 0.570 | 0.615 / 0.545 |
| | 48 | 0.406 / 0.444 | 0.395 / 0.459 | 0.386 / 0.433 | 0.426 / 0.495 | 0.729 / 0.666 | 0.829 / 0.677 | 0.660 / 0.589 |
| | 168 | 0.525 / 0.527 | 0.608 / 0.567 | 0.613 / 0.582 | 0.727 / 0.671 | 1.318 / 0.855 | 1.038 / 0.835 | 0.748 / 0.647 |
| | 336 | 0.531 / 0.539 | 0.702 / 0.620 | 0.707 / 0.634 | 0.754 / 0.670 | 1.930 / 1.167 | 1.657 / 1.059 | 0.782 / 0.683 |
| | 720 | 0.578 / 0.578 | 0.831 / 0.731 | 0.834 / 0.741 | 0.885 / 0.773 | 2.726 / 1.575 | 1.536 / 1.109 | 0.851 / 0.757 |
| ECL | 48 | 0.255 / 0.352 | 0.344 / 0.393 | 0.334 / 0.399 | 0.355 / 0.418 | 1.404 / 0.999 | 0.486 / 0.572 | 0.369 / 0.445 |
| | 168 | 0.283 / 0.373 | 0.368 / 0.424 | 0.353 / 0.420 | 0.368 / 0.432 | 1.515 / 1.069 | 0.574 / 0.602 | 0.394 / 0.476 |
| | 336 | 0.292 / 0.382 | 0.381 / 0.431 | 0.381 / 0.439 | 0.373 / 0.439 | 1.601 / 1.104 | 0.886 / 0.795 | 0.419 / 0.477 |
| | 720 | 0.289 / 0.377 | 0.406 / 0.443 | 0.391 / 0.438 | 0.409 / 0.454 | 2.009 / 1.170 | 1.676 / 1.095 | 0.556 / 0.565 |
| | 960 | 0.299 / 0.387 | 0.460 / 0.548 | 0.492 / 0.550 | 0.477 / 0.589 | 2.141 / 1.387 | 1.591 / 1.128 | 0.605 / 0.599 |
| Count | | 18 | 5 | 6 | 0 | 0 | 0 | 0 |
Table 12: Multivariate long sequence time-series forecasting results on four datasets (five cases).

# D.4 VISUALIZATIONS

We visualize the convolutional filter $\bar{K}$ learned by S4 for the Pathfinder and CIFAR-10 tasks below.

![](images/40f1b23c0f925c3f80739d2a73ed29ed583be751b0e1d6f315d6702396962cbe.jpg)
Figure 4: (Convolutional filters on Pathfinder) A random selection of filters learned by S4 in the first layer (top 2 rows) and last layer (bottom 2 rows) of the best model.
# EINOPS: CLEAR AND RELIABLE TENSOR MANIPULATIONS WITH
EINSTEIN-LIKE NOTATION

Alex Rogozhnikov*

alex.rogozhnikov@ya.ru

# ABSTRACT

Tensor computations underlie modern scientific computing and deep learning. A number of tensor frameworks have emerged, varying in execution model, hardware support, memory management, model definition, etc. However, tensor operations in all frameworks follow the same paradigm. Recent neural network architectures demonstrate demand for higher expressiveness of tensor operations. The current paradigm is not suited to writing readable, reliable, or easy-to-modify code for multidimensional tensor manipulations. Moreover, some commonly used operations do not provide sufficient checks and can break a tensor structure. These mistakes are elusive, as no tools or tests can detect them. Independently, API discrepancies complicate code transfer between frameworks. We propose einops notation: a uniform and generic way to manipulate tensor structure that significantly improves code readability and flexibility by focusing on the structure of input and output tensors. We implement einops notation in a Python package that efficiently supports multiple widely used frameworks and provides a framework-independent, minimalist API for tensor manipulations.

# 1 INTRODUCTION

Deep learning (DL) over the past decade has achieved significant progress in analysis and synthesis of images, audio and text (Bengio et al., 2017; Aggarwal et al., 2018; Foster, 2019). Tools relying on these methods became a commodity in production data pipelines.

Available research-and-production frameworks for DL, such as pytorch (Paszke et al., 2019), tensorflow (Abadi et al., 2015), mxnet (Chen et al., 2015), jax (Bradbury et al., 2018), and others, vary in numerous aspects, but their core functionality is built around efficient computations on $n$-dimensional arrays (tensors for brevity$^{1}$).
The API exposed by frameworks for tensor operations follows the same approach, which combines high efficiency (specialized hardware can be utilized) with user convenience: computations can be expressed in high-level languages, such as Python, using a limited number of exposed and usually pre-compiled operations.

Due to the growing usage of DL in production and the rising complexity of models, it becomes increasingly more important to provide programming interfaces that enable reliable and scalable development. We demonstrate that the approach to defining tensor operations taken by existing frameworks does not encourage writing code that is easy to interpret, maintain or modify; additionally, some of the core operations do not conduct sufficient checks and can lead to hard-to-catch mistakes. To address these problems, we propose an Einstein-like notation for operations, called einops. We implement this approach in a Python (Van Rossum & Drake, 2009) package to allow simple integration of the notation into existing code across a variety of frameworks.

Outline We first briefly describe the mainstream approach to tensor operations and point to its issues with examples. We review previously proposed ideas to resolve the mentioned problems in related works. Our approach, einops - a verbose notation for tensor manipulation - is introduced next, followed by code examples in case studies. We implement the notation in a Python package, and explain the main design choices in the implementation details section. We conclude with a discussion of the role of einops and common criticisms.

# 2 MAINSTREAM APPROACH FOR TENSOR OPERATIONS

Tensor programming nowadays dominates DL and plays a crucial role in scientific computing. It first appeared in APL (Iverson, 1962), was popularized by MATLAB (Matlab, 1993), was spread in the Python community by numpy (Harris et al., 2020), and is now provided by multiple frameworks.
Currently, tensor manipulations in all mainstream frameworks are based on the following assumptions:

- A tensor is an $n$-dimensional array with a shape (e.g. $4 \times 6 \times 3$) and a data type (e.g. float32).
- When elements are matched across tensors, axes are aligned by order. Conventions regulate cases of different dimensionalities, e.g. broadcasting in Numpy (2021).
- Multiple operations act differently on axes. Those operations either have conventions about axes order (e.g. recurrent units, convolutions; convolutions expect input to be either BHWC or BCHW ordered) or should be provided with indices of axes that should be treated separately$^{2}$:

```python
y = x.max(axis=1)      # some operations are provided with indices
y = x.swapaxes(0, 1)   # of specially treated axes
```

The mixed option is possible when an operation has defaults for "special axes" that can be overridden during a call.

Benefits of the mainstream approach are a simple API and a baseline implementation, as well as versatility: a lot of research code operates solely using this kind of operations for number crunching. The drawbacks are the absence of semantics in an operation and no way to reflect the expected input and output. To predict the output after a chain of manipulations, a researcher/engineer has to carefully keep in memory (or in the code comments) the shape and contents of each intermediate tensor, and thus keep track of every operation.

In the following listing, a batch of images is collapsed into a single image by placing images in a row next to each other:

```python
# im_stack has shape [b, 3, h, w]
y1 = im_stack.transpose(2, 0, 3, 1).reshape(h, b * w, 3)
y2 = im_stack.transpose(2, 3, 0, 1).reshape(h, b * w, 3)
```

One of $y_1, y_2$ contains a correct image, while the other is irrelevant – it requires some time to figure out which is which.
A typical computation chain contains multiple tensor operations (only two in this example) and requires writing down all intermediate steps to "debug" the code. Annoyingly, the resulting tensors $y_1$ and $y_2$ have the same shapes, and a mistake in the code may stay under the radar for a long time since the code never errors out. The lack of stronger checks is a weak point of conventional operations.

In most cases we also cannot meaningfully visualize intermediate layers, so there is no way to narrow down the search for the source of a problem. Thus, a researcher/engineer has to check all the code after a failure.

Traditional operations restrict code flexibility: any change in the shape agreements between parts of the code is hard to align, and all related code fragments (frequently located in different places) should be updated simultaneously. In the next code fragment we add the omitted batch dimension to the code from the vision permutator (Hou et al., 2021), and then update the code to support depth:

```python
# pytorch-like code without batch dimension, as in the paper
x_h = x.reshape(H, W, N, S).permute(2, 1, 0, 3).reshape(N, W, H*S)
x_h = proj_h(x_h).reshape(N, W, H, S).permute(2, 1, 0, 3).reshape(H, W, C)
# with batch dimension
x_h = x.reshape(B, H, W, N, S).permute(0, 3, 2, 1, 4).reshape(B, N, W, H*S)
x_h = proj_h(x_h).reshape(B, N, W, H, S).permute(0, 3, 2, 1, 4).reshape(B, H, W, C)
# with batch and depth dimension
x_h = x.reshape(B, H, W, D, N, S).permute(0, 4, 2, 3, 1, 5).reshape(B, N, W, D, H*S)
x_h = proj_h(x_h).reshape(B, N, W, D, H, S).permute(0, 4, 2, 3, 1, 5).reshape(B, H, W, D, C)
```

Modifications are very error-prone: all indices should be recomputed and the order of reshape operands should be verified. Unusually, this fragment is fairly self-documenting, since the final reshape in each line hints at the intended order of axes after the transposition.
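One pragmatic way to validate such a chain is a round trip with an identity stand-in for the projection: the two batched lines must then map x back to itself. The following is an illustrative numpy check (names and sizes are hypothetical, with C = N * S and numpy's transpose playing the role of permute):

```python
import numpy as np

B, H, W, N, S = 2, 3, 4, 5, 6
C = N * S
x = np.random.rand(B, H, W, C)
proj_h = lambda t: t  # identity stand-in for the learned projection

x_h = x.reshape(B, H, W, N, S).transpose(0, 3, 2, 1, 4).reshape(B, N, W, H * S)
out = proj_h(x_h).reshape(B, N, W, H, S).transpose(0, 3, 2, 1, 4).reshape(B, H, W, C)

# the permutation (0, 3, 2, 1, 4) is its own inverse, so the chain is the identity
assert np.array_equal(out, x)
```

Such a check catches index-recomputation mistakes, but has to be rewritten by hand for every variant of the chain.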
In DL code it is common to use x.transpose(0, 3, 1, 2), expecting other users to recognize a familiar pattern or leaving a comment. Relatedly, a transposition in the code requires operating with three contexts in mind (original order of axes, permutation, resulting order), and even when simplified to just the permutation, it is unclear whether permutation $(2 \circ 3 \circ 1)$ is inverse to $(1 \circ 3 \circ 2)$ .

A less critical, but still notable source of mistakes is 0-based enumeration of axes (Julia, MATLAB and R use 1-based), while we propose a framework that does not rely on axis numeration at all. It is common to refer to the axis with index 0 as the "first axis/dimension" (see e.g. numpy documentation), and there is no way to change this habit and avoid confusion between human communication and code.

Finally, common tensor operations can easily break the tensor structure. For example, reshape, a common operation in DL code, easily breaks the tensor structure because the whole tensor is treated as a sequence and no connection is assumed between axes in input and output, see Figure 1.

![](images/0e3becd3424fcc57a68709ed294a668ce0696933a9080068ea34de2537c0a1dc.jpg)
(a) x shape $= (92,3,96)$

![](images/0464f026c937723dab13be385e07a0145358737219221234b53c930f09584bf3.jpg)
(b) x.reshape(46, 3, -1) shape = (46, 3, 192)

![](images/ba13782fe8c7af87662f1816fca2a15da55f64b32ef039163fd65398c7250226.jpg)
(c) x.reshape(48, 3, -1) shape = (48, 3, 184)

Figure 1: Example of breaking the tensor structure in a 3d tensor. We use the non-conventional order HCW for visualizations: (a) original image; (b) partial mixing results in new colors; (c) mixing of all axes. Result shapes are shown underneath.

The aforementioned implementation mistakes can result in rejecting valuable research or drawing incorrect conclusions about the data. Moreover, there is no recipe to identify an incorrect implementation, because the resulting tensor shape and data type are not affected by an error.
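At least the inverse-permutation question above has a mechanical answer: with 0-based numpy indexing, np.argsort of a permutation yields its inverse (a small illustrative check):

```python
import numpy as np

x = np.random.rand(2, 3, 4, 5)
perm = (0, 3, 1, 2)            # e.g. NHWC -> NCHW
inv = tuple(np.argsort(perm))  # inverse permutation, here (0, 2, 3, 1)

# transposing by perm and then by its inverse restores the original tensor
assert np.array_equal(x.transpose(perm).transpose(inv), x)
```

The catch is that nothing forces a developer to compute the inverse this way; eyeballed "inverses" compile and run just as happily.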
Even the presence of an error can be obscured: poor performance can be misattributed to a number of other factors: data, hyperparameters, method, etc. All of the issues above are of catastrophic importance for research costs and timelines in a setting where one experiment requires dozens to thousands of GPU-hours.

A number of recently proposed architectures (Wu et al., 2021; Tolstikhin et al., 2021; Jumper et al., 2021; Liu et al., 2021; Hou et al., 2021; Touvron et al., 2021) demonstrate that the conservative approach with a fixed ordering of axes (like BCHW for convolutional networks) is not sufficient. Existing frameworks carry this imprint, which does not help with the search for new architectures.

# 3 RELATED WORKS

A commonly taken approach to increase reliability is assigning names/labels to tensor dimensions (below we refer to them as labeled tensors). The most known implementations are xarray (Hoyer & Hamman, 2017), labeled_tensor in tensorflow (Christiansen & Hoyer, 2016), namedtensor (2019), and named tensors in Pytorch (2019). Despite being around for many years, this idea got little adoption and is not used in DL code: development stopped for labeled_tensor and namedtensor, and named tensors in pytorch are still experimental.

In this approach, operations match tensor axes based on labels (a common choice of label is a string name) and rely on axis labels instead of axes order, e.g.:

```python
# x1 has axes (x, y, height) and x2 has axes (time, x, y)
x1 + x2 # result has axes (x, y, height, time), maybe in a different order
x1.mean('height') # reduce over axis named 'height'
```

Axes matching rules vary across implementations. However, we can describe some common issues why this approach did not gain wide support in the DL community:

- Operations focus on modified axes, and describe neither input nor output; a user has to remember axes labels for each of the intermediate tensors.
- Less control over data layout: order of axes may significantly influence speed (Weber & Goesele, 2017), but is not transparent.
- Names should be strictly defined, and mistakes in names or their alignment between modules may result in wrong computations, not an exception, e.g. for the namedtensor package:

```python
# x1 has axes (height, width)
# x2 has axes (h, w)
# x3 has axes (heigt)
x1 + x2 + x3 # has axes (height width h w heigt) in some order
```

- Adoption requires a strong buy-in decision, as all code should be axis label-centric. By contrast, a transition to labeled tensors in isolated code pieces only (e.g. functions) does not prevent (more elusive) errors in the interaction between functions, but introduces constant conversions between the code styles. Third-party components need wrapping to support labels.
- Symmetric and antisymmetric tensors with multiple identical axes pose an additional challenge: all axes labels should be unique to allow matching and axis specification.
- Labeled tensors have issues with the integration of DL blocks (details in Appendix A).

Proposed implementations of labeled tensors also break the common principle of decomposition in software engineering, which states that every function has its own scope with input names and names for intermediate variables. Everything that is shared between scopes should be described in the function signature. Whenever an internal structure of a passed or returned entity should be shared between scopes, a type/class/interface/protocol is introduced to describe the passed argument. However, the concept of labeled tensors breaks this logic: it is assumed that called and calling functions agree on axes names, but no way to document these expectations is proposed.

An alternative approach to increase readability and reliability of tensor-operating code is to deliberately set interface restrictions only on large neural modules such as language models, encoders or decoders, as in NeMo Kuchaiev et al.
(2019). While allowing one to reuse and wrap existing code to glue large components, this approach does not improve the internals of the modules, where problems are harder to locate and code is less documented. These improvements of high-level interfaces still have their challenges; for example, a language model can expect to manipulate sequences of letters and thus expects an axis "letter". However, surrounding code may try to use it for prediction of words, pixels or phonemes. Thus, relabeling of axes may be required to "connect" subsystems.

In 2011, M. Wiebe introduced an operation einsum in the numpy package. With some simplifications (absence of covariant and contravariant indices, a contracted dimension may be not repeated) einsum mimics the Einstein summation rule commonly used in physics. numpy.einsum stands out from the rest of the numpy API and for a long time was rarely mentioned in tutorials. However, the function's universality and expressiveness were beneficial, and it was ported to other packages: tensorflow, pytorch, mxnet, chainer (Tokui et al., 2019), etc.

```python
numpy.einsum('ij,jk->ik', A, B) # matrix multiplication
numpy.einsum('ijk->ij', C) # sum over last axis
numpy.einsum('ij,ji->', A, B) # trace of matrix product
```

einops revisits and expands this approach with an emphasis on complex tensor layouts and rearrangements, additional checks and broader functionality. In our work we try to align the interface with einsum to allow smooth simultaneous usage. However, interface adjustments (such as support for multi-character axes names) are necessary. Most of our changes can be readily applied to einsum. A detailed discussion of differences between einops and einsum is given in Appendix B.

There is ongoing research on creating languages for low-level definition of tensor operations with explicit indexing, e.g. Tensor Comprehensions (Vasilache et al., 2018) and Dex (Paszke et al., 2021).
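The einsum calls above can be cross-checked against their dense counterparts (a minimal numpy sketch with hypothetical small shapes):

```python
import numpy as np

A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
C = np.random.rand(3, 4, 5)
B2 = np.random.rand(4, 3)

assert np.allclose(np.einsum('ij,jk->ik', A, B), A @ B)            # matrix multiplication
assert np.allclose(np.einsum('ijk->ij', C), C.sum(axis=-1))        # sum over last axis
assert np.allclose(np.einsum('ij,ji->', A, B2), np.trace(A @ B2))  # trace of matrix product
```

The subscripts on the left-hand side of `->` name the input axes, and the right-hand side names the surviving output axes; repeated indices are summed over.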

# 4 einops

Einstein-Inspired Notation for OPerationS, einops, is our proposal to address the problems mentioned in Section 2. The core of our approach is a new notation for transformation patterns; in its basic form, this notation enumerates the tensor elements in one-to-one correspondence to the set of axes. As seen in the examples below, we allow the number of axes to be different from the tensor dimensionality. The notation uses a simple syntax, formally defined in Appendix C, and is based on the following rules:

- an axis present only in the input (the left-hand side) is reduced (e.g. with max-reduction)
- an axis present only in the output is "repeated" (tensor values are the same for all values of new axes)
- all axis identifiers on either side of the expression should be unique

Examples of einops transformation patterns are

```txt
'b c h w -> b h w c'                    # transpose
'b c h w -> b c'                        # reduce on h, w
'b c -> b c h w'                        # repeat on h, w
'(h1 h2) (w1 w2) c -> (h1 w1) h2 w2 c'  # split image to patches, stack them
```

Each tensor pattern ensures a one-to-one mapping between an element's position in the tensor and the values of axis variables. This requires all axes used in one tensor pattern to be different (thus traces, permitted by einsum, are not provided by einops). This also requires that an ellipsis can be used only once within a tensor pattern.

The main novelty of our notation is the composition and decomposition of axes denoted by parentheses.
(De)composition uses the C-ordering convention, intuitively associated with digits in a number:

```python
# x is of shape (10, 10, 10, 10), then x[6, 2, 4, 9] == y[6249]
y = rearrange(x, 'a b c d -> (a b c d)')
```

Changing the rightmost of the "digits" changes the composed index in "small steps", while any change in the leftmost results in "large steps", even when axes are not decimal:

```txt
# Rearrange pattern that composes 3 axes into one: i1 i2 i3 -> (i1 i2 i3)
# Let the original array have a shape of [2, 3, 2]; the result has a length 2 * 3 * 2 = 12
i1             0 0 0 0 0 0 1 1 1 1  1  1
i2             0 0 1 1 2 2 0 0 1 1  2  2
i3             0 1 0 1 0 1 0 1 0 1  0  1
final position 0 1 2 3 4 5 6 7 8 9 10 11
```

The reverse pattern (i1 i2 i3) -> i1 i2 i3 uses the same bijection to decompose an axis into three. Since an axis can be decomposed in multiple ways (e.g. 12 could be represented as $2 \times 3 \times 2$ or $1 \times 12 \times 1$ or $3 \times 1 \times 4$ , etc.), additional axis size specifications are required during decomposition. The following rule is helpful for C-ordered arrays (the default ordering in all current backends): in case the order of axes does not change, the result of rearrange is still a view of the original data. For example, the rearrangement '(a b c) (d e f) (g h) -> a b (c d) e (f g h)' returns a view and no copy is required.

Axes can be referred to by their size. These anonymous axes have the same role as named ones, but due to the lack of a name they cannot be matched across different tensors. Unitary axes have a special meaning - they do not correspond to an axis variable, and thus their behavior is separate from anonymous axes.

```txt
'h w -> h w 1' # add unitary axis
'h w -> h w 3' # repeat values along new anonymous axis
'1 h w 3 -> h w' # remove unitary axis and reduce on anonymous axis of length 3
'... h w -> ... w h' # transpose two last dimensions
'b ... c -> (...)
b c' # compose all but first and last dimensions
# and move resulting new axis to front
'b c ... -> b c' # reduce on all but first two dimensions
```

All anonymous axes are treated as different even if they share a length. Similar to `einsum`, an ellipsis works as a wildcard for zero or more axes; their number and lengths are derived from the input shape(s).

Ellipses are not named (the grammar allows adding names later), thus all ellipses within a transformation pattern refer to the same group of unnamed axes.

Pattern composition and decomposition are particularly helpful to leverage existing operations for data of higher dimensionality. E.g. if an attention function accepts tensors $\mathsf{k}$ , $\mathsf{q}$ , $\mathsf{v}$ of shape [batch, seq, channel], one can turn it into multi-head attention for 3-dimensional data by composing height, width and depth into a single dimension, and grouping the head and batch dimensions to ensure independent processing of attention heads: b h w d head c -> (b head) (h w d) c. Likewise, other neural blocks can be "upgraded" by rearranging inputs and outputs.

einops provides three functions (rearrange, reduce, and repeat), which are shown below in examples (additional axes specifications are provided with `**kwargs`):

```python
# organize 16 images in 4x4 grid
rearrange(im, '(b1 b2) h w c -> (b1 h) (b2 w) c', b1=4, b2=4)
# max-pooling with kernel of size 2x2
reduce(im, 'b c (h h2) (w w2) -> b c h w', 'max', h2=2, w2=2)
# 2x upsampling of individual image by repeating pixels
repeat(im, 'h w c -> (h h2) (w w2) c', h2=2, w2=2)
```

While all patterns could be handled by a single function instead of three, we made an explicit choice to separate the scenarios of "adding dimensions" (repeat), "removing dimensions" (reduce) and "keeping the number of elements the same" (rearrange). This helps in producing more specific error messages when a wrong pattern is passed.
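Since all current backends use C-ordering, the composition bijection described earlier can be verified with plain numpy reshape (a sketch of the x[6, 2, 4, 9] == y[6249] example; reshape is what einops uses under the hood for composition):

```python
import numpy as np

x = np.arange(10 ** 4).reshape(10, 10, 10, 10)
y = x.reshape(-1)                # same bijection as 'a b c d -> (a b c d)'

# the digits of the flat index are exactly the per-axis indices
assert y[6249] == x[6, 2, 4, 9]

# when the order of axes does not change, the result is a view, not a copy
assert np.shares_memory(y, x)
```

The same check with a non-decimal shape would still hold, only with mixed-radix "digits" as in the i1/i2/i3 table above.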

In einsum, when an axis is present in all tensors, the operation is performed independently for all values of this axis; the same principle holds in einops. A tensor pattern only identifies the correspondence between axes and an element's position in the tensor, but does not affect the arithmetic operation; this ensures that input and output patterns can be changed independently. In particular, rearrange is an arithmetic identity (the same values are returned), but usage of different input and output patterns turns it into a rather universal tool for changing tensor shape/layout.

The proposed notation addresses different problems of the mainstream approach:

- Both input and output are described in the operation definition: tensor dimensionality and expected order of axes. This makes einops-based code more declarative and self-documenting. A user is not required to remember or infer shapes of tensors after every operation.
- Input is checked for the number of dimensions and divisibility of the corresponding dimensions. The length of a dimension is checked if provided.
- A tensor structure cannot be broken by design, because the notation connects input axes (or their constituents) to output axes.
- Axis enumeration is not used, so there is no way to make an off-by-one mistake.
- Users do not need to compute permutations of axes; those are computed from a pattern.
- The einops notation alleviates the need to track a tensor layout, since it is described by the patterns.
- einops and einsum "document" inputs and outputs, simplifying the inference of tensor shapes from the code for other tensors that are not direct inputs/outputs of einops, but interact with or are computed using direct inputs/outputs (an example is given in Appendix E).

We show the versatility of einops by expressing common numpy (np) operations in Listing 1.

```python
np.transpose(x, [0, 3, 1, 2])             # rearrange(x, 'b h w c -> b c h w')
np.reshape(x, [x.shape[0]*x.shape[1],
               x.shape[2]])               # rearrange(x, 'h w c -> (h w) c')
np.squeeze(x, 0)                          # rearrange(x, '() h w c -> h w c')
np.expand_dims(x, -1)                     # rearrange(x, 'h w c -> h w c ()')
np.stack([r, g, b], axis=2)               # rearrange([r, g, b], 'c h w -> h w c')
np.concatenate([r, g, b], axis=0)         # rearrange([r, g, b], 'c h w -> (c h) w')
x.flatten()                               # rearrange(x, 'b t c -> (b t c)')
np.swapaxes(x, 0, 1)                      # rearrange(x, 'b t c -> t b c')
left, right = np.split(x, 2, axis=1)      # rearrange(x, 'h (lr w) c -> lr h w c', lr=2)
even, odd = x[:, 0::2], x[:, 1::2]        # rearrange(x, 'h (w par) c -> par h w c', par=2)
np.max(x, (1, 2))                         # reduce(x, 'b h w c -> b c', 'max')
np.mean(x)                                # reduce(x, 'b h w c ->', 'mean')
np.mean(x, axis=(1, 2), keepdims=True)    # reduce(x, 'b h w c -> b () () c', 'mean')
np.reshape(x, [-1, 2]).max(axis=1)        # reduce(x, '(h 2) -> h', 'max')
np.repeat(x, 2, axis=1)                   # repeat(x, 'h w -> h (w 2)')
np.tile(x, (1, 2))                        # repeat(x, 'h w -> h (2 w)')
np.tile(x[:, :, np.newaxis], (1, 1, 3))   # repeat(x, 'h w -> h w 3')
```

Listing 1: Correspondence between numpy and einops operations (the numpy call on the left, its einops equivalent in the comment).

# 5 CASE STUDIES

We again consider fragments from the vision permutator (Hou et al., 2021). The two examples below only differ in which axis is mixed.

```python
# vision permutator - mixing along h
x_h = x.reshape(B, H, W, N, S).permute(0, 3, 2, 1, 4).reshape(B, N, W, H*S)
x_h = proj_h(x_h).reshape(B, N, W, H, S).permute(0, 3, 2, 1, 4).reshape(B, H, W, C)
# vision permutator - mixing along w
x_w = x.reshape(B, H, W, N, S).permute(0, 1, 3, 2, 4).reshape(B, H, N, W*S)
x_w = proj_w(x_w).reshape(B, H, N, W, S).permute(0, 1, 3, 2, 4).reshape(B, H, W, C)
```

While four operations had to be updated simultaneously, in the einops counterpart the changes are limited to swapping h and w before and after the projection, because the layouts of input and output are disentangled and changes in one do not propagate to the other. And to remove e.g. the batch dimension, one just removes the axis b from the patterns (this is also a generic property of einops).

```python
# einops: mixing along h
x_h = rearrange(x, 'b h w (n s) -> b n w (h s)', s=S)
x_h = rearrange(proj_h(x_h), 'b n w (h s) -> b h w (n s)', s=S)
# einops: mixing along w. We swapped h and w before and after projection
x_w = rearrange(x, 'b h w (n s) -> b n h (w s)', s=S)
x_w = rearrange(proj_w(x_w), 'b n h (w s) -> b h w (n s)', s=S)
```

The next fragment is derived from OpenAI's implementation of Glow (Kingma & Dhariwal, 2018). As other neural flows, Glow includes multiple rearrangements.

```python
def unsqueeze2d(x, factor=2):
    assert factor >= 1
    if factor == 1:
        return x
    shape = x.get_shape()
    height = int(shape[1])
    width = int(shape[2])
    n_channels = int(shape[3])
    assert n_channels >= 4 and n_channels % 4 == 0
    x = tf.reshape(
        x, (-1, height, width, int(n_channels/factor**2), factor, factor))
    x = tf.transpose(x, [0, 1, 4, 2, 5, 3])
    x = tf.reshape(x, (-1, int(height*factor),
                       int(width*factor), int(n_channels/factor**2)))
    return x
# same in einops, no function introduced
rearrange(x, 'b h w (c h2 w2) -> b (h h2) (w w2) c', h2=factor, w2=factor)
```

In the original implementation, the function name is confusing and non-descriptive.
To reflect the actual transformation, the name should be changed to rearrange_by_squeeze_channels_and_unsqueeze_h_and_w. einops alleviates the necessity to introduce a function, as the arguments describe the input, output and the transformation itself. This example also demonstrates how einops performs transformations, as under the hood it makes the same sequence of transformations (reshape-transpose-reshape) as the original code. The reverse rearrangement (also used in Glow, see Appendix F) requires detailed analysis before implementation, as no arguments are shared between the original and reverse rearrangements. In einops a pattern can be reversed by swapping the input and output parts of the pattern.

In Appendix E we analyze and rewrite a larger fragment of code: a multi-head attention module derived from the popular implementation (Huang, 2018).

# 6 IMPLEMENTATION DETAILS

We implement the proposed notation in a Python package, einops.

Support of multiple frameworks. einops supports a diverse set of DL frameworks (pytorch, tensorflow, chainer, jax, gluon) as well as frameworks for tensor computations: numpy, cupy (Okuta et al., 2017). We refer to them as backends. The major challenge in the support of multiple backends is the absence of a common API: even simple operations like repeat, view, or transpose are defined inconsistently.

```python
np.transpose(x, [2, 0, 1]) # numpy
x.transpose(2, 0, 1) # numpy
tf.transpose(x, [2, 0, 1]) # tensorflow
K.permute_dimensions(x, [2, 0, 1]) # keras
mx.nd.transpose(x, (2, 0, 1)) # mxnet
x.permute(2, 0, 1) # torch
rearrange(x, 'h w t -> t h w') # einops (any backend)
```

This inconsistency makes projects like einops more valuable for users, as they minimize exposure to framework specifics when users do not need them. The proposal (Data-Api, 2021) may help convergence on a shared API and simplify the development of cross-framework applications, including einops.
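The value of a single entry point can be sketched with a tiny type-based dispatcher (a hypothetical helper, not the einops implementation, which routes through per-backend proxy classes as described next):

```python
import numpy as np

def transpose_any(x, perm):
    """Illustrative dispatch: pick the backend-specific transpose spelling."""
    if isinstance(x, np.ndarray):
        return np.transpose(x, perm)       # numpy spelling
    # elif torch.is_tensor(x):
    #     return x.permute(*perm)          # torch spelling (commented: torch optional)
    raise TypeError(f"unsupported tensor type: {type(x)}")

x = np.zeros((5, 4, 3))
assert transpose_any(x, (2, 0, 1)).shape == (3, 5, 4)
```

A user-facing pattern such as 'h w t -> t h w' keeps this dispatch, and the per-backend argument conventions, entirely out of user code.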

Backend recognition in einops does not use wrapped imports, which are commonly used in Python to handle optional dependencies:

```python
def is_numpy_tensor(x):  # einops does not use this approach
    try:
        import numpy as np
        return isinstance(x, np.ndarray)
    except ImportError:
        return False
```

Instead, einops keeps a dictionary that maps a tensor type to a backend. When the type is not found in the dictionary, a pass through the backends is done, and before importing a module, einops confirms that the backend was previously loaded in sys.modules, since einops will not receive tensors from non-imported modules. The main reasons for this strategy are: a) some backends take a lot of memory and time to load (e.g. tensorflow), which may result in unforeseen usage of resources if a backend is installed but not used; b) it is possible that a backend is installed incorrectly (e.g. a wrong binary) and importing it leads to non-catchable segmentation faults; c) checking backends one-by-one incurs unnecessary overhead, which we skip with a dictionary lookup.

einops has shared logic for parsing and conversion into operations for every framework. This conversion is very efficient: every einops pattern is converted into at most 4 tensor operations by the backend, even if the dimensionality of the tensor is large. Overall, einops adds marginal overhead on top of the DL frameworks, see Appendix G.

Support for callbacks. Each backend is represented by a proxy class that provides a minimal set of operations necessary to manipulate tensors. In addition, several supplementary methods are required and implemented to allow generic testing: most tests can be run against any backend. In addition, einops implements layers for backends with layers support. This codebase heavily relies on stateless operations and has some backend-specific code.

Caching plays an important role in einops high performance (Appendix H).
There are two layers of caching: i) a cache for a pattern and the provided axes; ii) a cache for a pattern, the provided axes, and the input shape. When the same pattern is applied to a new shape, only shape verification and computation of unknown axes sizes are done. When a pattern is applied to an input of the same shape (quite typical for iterative algorithms, and very common in DL), einops only executes a sequence of commands with cached parameters.

Exceptions are detailed and provide information about the performed operation, including the pattern and the provided axes sizes, to simplify debugging.

# 7 DISCUSSION

Speaking of criticism, einops is sometimes described as "stringly-typed". We should point out that this does not make einops less reliable compared to the mainstream approach:

```txt
numpy.transpose: [ndarray, Tuple[int]] -> ndarray
rearrange: [ndarray, string] -> ndarray (ndarray can be polymorphic)
```

So input and output types are not any different, except for a string instead of a tuple. In these frameworks, tensor types do not describe the number of dimensions, and the existing type system cannot set restrictions on shape components. Static analysis is equally ignorant of mistakes in patterns and of mistakes in, e.g., axes order. There is no static check that prevents users from passing a tuple with a wrong number of components, or with repeats, or just with ridiculously large numbers. Interestingly, numpy.einsum can accept axis indices instead of a string pattern, but this makes the operation less expressive and it is almost never used.

einops does not enforce any alignment of shapes/axes across operations, and thus does not provide integrated analysis/tracking of shapes. Unfortunately, there is currently no design that can accomplish this goal either: different flavors of labeled tensors remain prototypes. A successful design should satisfy multiple requirements that are rather challenging and seem to be irresolvable without a new language (or new language features).
On the other hand, to become practically valuable, the system should not be too sophisticated. If the resulting solution is significantly harder to integrate compared to running every module with several test inputs, it is unlikely to get wide adoption.

We performed an initial exploration of the notation's applicability in 2018, when einops was open-sourced, by reimplementing a set of diverse fragments from popular DL implementations across different tasks and domains (see details in Appendix I). Since its initial release, adoption of einops has grown steadily; it is currently used in more than 1000 public Github projects, including contributions from widely known AI laboratories. This important evidence (discussed in Appendix J) confirms the design choices and the ultimate suitability of the notation for the needs of research in deep learning.

The einops notation is not tied to a specific implementation or even a language: it can be implemented in more efficient languages. An implementation can also use lower-level primitives and optimizations provided e.g. by TVM and MLIR, not framework-provided abstractions. Both directions have a potential to boost einops performance. Alternatively, the notation can be introduced to lower-level primitives.

# 8 CONCLUSION

We introduce einops - a minimalist notation for tensor manipulations that leverages patterns for describing the structure of input and output tensors. einops mitigates a number of issues common to conventional tensor manipulation routines, represents a number of commonly used functions with a small API, and makes code more readable and reliable. We implement the notation in a Python package that provides an identical API across a variety of tensor frameworks. Cross-framework support, expressiveness of code, additional checks and simple integration into existing projects make einops a convenient tool for researchers and engineers.

# ACKNOWLEDGMENTS

We thank M. Cramner, C. Garcia, E. Glazistova, A. Grachev, D. Ivanov, J. Henriques, S.
Hoyer, T. Likhomanenko, A. Molchanov, S. Ramakrishnan and others for participating in user studies and/or discussions of the notation and API. Multiple improvements to the project documentation were made by C. Garcia and other community members.
We thank Phil Wang for publishing numerous high-quality implementations of new architectures using einops and introducing practitioners to the notation by demonstrating its usage in complete projects.
We are grateful to A. Chernyavskiy for feedback on an initial draft of this manuscript and to T. Likhomanenko for multiple suggestions on the paper, help during the review process, and funding the initial stages of this project.
We thank the anonymous reviewers and the program chair for a discussion that helped us to improve and strengthen the paper.

# REFERENCES

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.
Charu C Aggarwal et al. Neural networks and deep learning. Springer, 10:978-3, 2018.
Yoshua Bengio, Ian Goodfellow, and Aaron Courville. *Deep learning*, volume 1. MIT Press, Massachusetts, USA, 2017.
James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018.
URL http://github.com/google/jax.
Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274, 2015.
Eric M. Christiansen and Stephan Hoyer. tf.contrib.labeled_tensor. https://github.com/tensorflow/tensorflow/commit/9d20f4ea4b0b5792bf88ef886d0143b7aa780522, 2016. [Online; accessed 1-Oct-2021].
Data-Api. Consortium for Python Data API Standards, Array API standard. https://github.com/data-apis/array-api/, 2021. [Online; accessed 1-Oct-2021].
David Foster. *Generative deep learning: teaching machines to paint, write, compose, and play*. O'Reilly Media, 2019.
Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. Array programming with NumPy. Nature, 585(7825):357-362, September 2020. doi: 10.1038/s41586-020-2649-2. URL https://doi.org/10.1038/s41586-020-2649-2.
Qibin Hou, Zihang Jiang, Li Yuan, Ming-Ming Cheng, Shuicheng Yan, and Jiashi Feng. Vision permutator: A permutable mlp-like architecture for visual recognition. arXiv preprint arXiv:2106.12368, 2021.
Stephan Hoyer and Joe Hamman. xarray: N-D labeled arrays and datasets in Python. Journal of Open Research Software, 5(1), 2017. doi: 10.5334/jors.148. URL http://doi.org/10.5334/jors.148.
Yu-Hsiang Huang. Implementation of "attention is all you need" paper. https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/2077515a8ab24f4abdda9089c502fa14f32fc5d9/transformer/SubLayers.py, 2018. [Online; accessed 1-Oct-2021].
Kenneth E Iverson. A programming language. In Proceedings of the May 1-3, 1962, spring joint computer conference, pp. 345-351, 1962.
John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with alphafold. Nature, 596(7873):583-589, 2021.
Diederik P. Kingma and Prafulla Dhariwal. Glow, official implementation. https://github.com/openai/glow/blob/91b2c577a5c110b2b38761fc56d81f7d87f077c1/tfops.py, 2018. [Online; accessed 1-Oct-2021].
Oleksii Kuchaiev, Jason Li, Huyen Nguyen, Oleksii Hrinchuk, Ryan Leary, Boris Ginsburg, Samuel Kriman, Stanislav Beliaev, Vitaly Lavrukhin, Jack Cook, et al. Nemo: a toolkit for building ai applications using neural modules. arXiv preprint arXiv:1909.09577, 2019.
Hanxiao Liu, Zihang Dai, David So, and Quoc Le. Pay attention to mlps. Advances in Neural Information Processing Systems, 34, 2021.
Matlab. Toolbox, symbolic math and others. Mathworks Inc, 1993.
namedtensor. namedtensor python package. https://github.com/harvardnlp/namedtensor, 2019. [Online; accessed 1-Oct-2021].
Numpy. Numpy broadcasting rules. https://numpy.org/doc/stable/user/basics.broadcasting.html, 2021. [Online; accessed 1-Oct-2021].
Ryosuke Okuta, Yuya Unno, Daisuke Nishino, Shohei Hido, and Crissman Loomis. CuPy: A NumPy-compatible library for NVIDIA GPU calculations. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Thirty-first Annual Conference on Neural Information Processing Systems (NIPS), 2017. URL http://learningsys.org/nips17/assets/papers/paper_16.pdf.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala.
Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 8024-8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.

Adam Paszke, Daniel D. Johnson, David Duvenaud, Dimitrios Vytiniotis, Alexey Radul, Matthew J. Johnson, Jonathan Ragan-Kelley, and Dougal Maclaurin. Getting to the point: Index sets and parallelism-preserving autodiff for pointful array programming. Proc. ACM Program. Lang., 5(ICFP), August 2021. doi: 10.1145/3473593. URL https://doi.org/10.1145/3473593.

Pytorch. Documentation for PyTorch named tensors. https://pytorch.org/docs/stable/named_tensor.html, 2019. [Online; accessed 1-Oct-2021].

Seiya Tokui, Ryosuke Okuta, Takuya Akiba, Yusuke Niitani, Toru Ogawa, Shunta Saito, Shuji Suzuki, Kota Uenishi, Brian Vogel, and Hiroyuki Yamazaki Vincent. Chainer: A deep learning framework for accelerating the research cycle. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2002-2011, 2019.

Ilya O. Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, et al. MLP-Mixer: An all-MLP architecture for vision. Advances in Neural Information Processing Systems, 34, 2021.

Hugo Touvron, Piotr Bojanowski, Mathilde Caron, Matthieu Cord, Alaaeldin El-Nouby, Edouard Grave, Armand Joulin, Gabriel Synnaeve, Jakob Verbeek, and Hervé Jégou. ResMLP: Feedforward networks for image classification with data-efficient training. arXiv preprint arXiv:2105.03404, 2021.

Guido Van Rossum and Fred L. Drake. Python 3 Reference Manual. CreateSpace, Scotts Valley, CA, 2009. ISBN 1441412697.
Nicolas Vasilache, Oleksandr Zinenko, Theodoros Theodoridis, Priya Goyal, Zachary DeVito, William S. Moses, Sven Verdoolaege, Andrew Adams, and Albert Cohen. Tensor Comprehensions: Framework-agnostic high-performance machine learning abstractions. arXiv preprint arXiv:1802.04730, 2018.

Nicolas Weber and Michael Goesele. MATOG: Array layout auto-tuning for CUDA. ACM Transactions on Architecture and Code Optimization (TACO), 14(3):1-26, 2017.

Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, and Lei Zhang. CvT: Introducing convolutions to vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 22-31, 2021.

# A DISCUSSION OF SUPPORT FOR COMMON DL BLOCKS IN LABELED TENSORS

While most labeled-tensor packages were targeted at deep learning, none of them proposed an integration with DL-specific blocks such as convolutions. For illustration, let us discuss the different ways labeled axes could be added to convolution:

- Should convolution stick to the existing behavior and rely on axis order, not names? If so, we get no additional checks and create potential problems: since axis names are required in other operations (e.g. reductions), users have to drop the names before every convolution and reorder axes if necessary. Next, users have to set the axes of the convolution output in agreement with the input axis labels.
- Should convolution rely on axis names? If so, should those names be fixed? For instance, convolution may require its input to have dimensions ("channel", "height", "width"), while all other dimensions are considered "batch-like" and preserved in the output. In that case almost all tensors would have the same axis names, so name checks become meaningless. And if one decides to apply attention or, e.g., a recurrent unit, each switch would require renaming axes.
- Should output names be predefined?
If so, all tensors produced by convolutions would have the same labels, still preventing meaningful checks. That seems to produce more potential confusion than help.
- Should input axis names be propagated to the output? If yes, which names should be propagated? Output channels at first seem to be disconnected from input channels; in practice, however, the opposite is equally frequent: in depth-wise convolution each input channel has a one-to-one correspondence with an output channel. Setting up common rules for the propagation of "height" and "width" labels runs into similar questions and exceptions: compare padding='same' and padding='valid'.

Similar questions arise for the other "building blocks" of DL.

The broader problem is to come up with a concept of axis "label" that is genuinely helpful in development, i.e. one that sets restrictions that help detect or prevent coding errors, while introducing limited performance and coding overhead.

All design choices taken (including support for various DL blocks) should be in agreement with each other, guided by some high-level logic that works across multiple models and applications. This is hardly feasible without a well-defined concept of axis "label", which none of the implementations has proposed.

# B EINOPS AND EINSUM

The einops notation is largely inspired by numpy.einsum and extends it for single-tensor transformations: in Listing 1, only 2 operations of 17 (transpose and swap axes) can be implemented with einsum. We designed the interfaces in einops to be aligned with numpy.einsum for simultaneous usage; in the multi-head attention case study (Appendix E) we demonstrate how they interact. Overall, the einops implementation deviates from numpy.einsum in the following ways:

- einops supports arbitrary reductions (max-, min-, sum-, mean-, logaddexp-reductions, as well as callables to support any other reduction), while einsum can only perform sum-reduction.
- While einsum allows only one-character axis names, einops supports multi-character ones: digits, underscores, and arbitrary capitalization are allowed, while space is used as a delimiter. For example, einsum reads the pattern 'hw c -> c hw' as a transposition of a 3-dimensional tensor, while in einops the same pattern is an operation on 2-dimensional tensors.
- While einsum implementations differ in their support for spaces between names and for capital letters, the einops notation is backend-independent.

In addition to these discrepancies, we introduce several new concepts in einops which are absent in numpy.einsum:

- Composition and decomposition of axes, the new core functionality described in Section 4.
- Unitary axes: in any pattern, a user can use 1 to reflect axes of size one.
- Specification of axes sizes, with verification of shapes and divisibility.
- Anonymous axes: axes that are present only on the lhs or rhs can be specified with just a size in the pattern. For example, repeat(x, 'h w -> h w c', c=3) can be written as repeat(x, 'h w -> h w 3'). Note that unitary axes are not a sub-case of anonymous axes.
- Introduction of new axes with einops.repeat by repeating elements along an axis. We demonstrate that this operation allows einops to support repeats and tiles.
- List inputs: einops accepts a list of tensors with identical shapes and dtypes, which are stacked. For example, rearrange([r, g, b, alpha], 'channel h w -> h w channel'). This extends einops to support tensor stacking and concatenation.
- Layers: einops functions have layer counterparts, e.g. einops.layers.torch.Rearrange('b c h w -> b (c h w)') can be used inside Sequential as one of the modules.[9]

At the same time, we do not implement some features of numpy.einsum:

- Axis repeats on the lhs. While einsum allows writing e.g. 'ii->' to compute a matrix trace, we dropped this in einops to prevent coding mistakes.
- Multi-tensor transformations.
While there is an einops layer that operates on two tensors, the core functionality of einops used by most users is one-tensor transformations. However, the einops novelties (new axes, anonymous axes, unitary axes, specification of axes sizes) can readily be applied to extend einsum.
- Implicit indexing. This feature relies on character sorting, does not encourage descriptive axis names (as those define the output axis order), and makes the output implicit rather than explicit. Implicit indexing also cannot be applied to einops.reduce and einops.repeat. There is negligible usage (<1%) of the list-inputs syntax of numpy.einsum, and it is not added to einops.

# C FORMAL GRAMMAR OF EINOPS PATTERNS

To accommodate the differences covered in Appendix B while keeping similarity with numpy.einsum, einops patterns follow a formal grammar, defined in Listing 2. This grammar evolved significantly from the prototype stage of the einops notation, when we performed the user study described in Appendix D.

```txt
transformation_pattern = tensor_pattern, '->', tensor_pattern ;
tensor_pattern = [delimiter], axis_group, {delimiter, axis_group}, [delimiter] ;
axis_group = axis
           | '(', [delimiter], ')'
           | '(', [delimiter], axis, {delimiter, axis}, [delimiter], ')' ;
axis = '...' | unitary_axis | anonymous_axis | named_axis ;
unitary_axis = '1' ;
delimiter = ' ', {' '} ;
named_axis = ? identifier: letters, digits and underscores, not starting with a digit ? ;
anonymous_axis = ? integer literal greater than one ? ;
```

Listing 2: Extended Backus-Naur Form grammar of the einops pattern notation for a single tensor.

# D CONFIRMATION OF READABILITY AND INTUITIVENESS

At the time of the first einops prototype, the idea of operations with named axes tied to the operation itself, not to a tensor, was not yet around within DL. Besides, among DL frameworks only pytorch had reasonable support for einsum (with multiple performance issues reported).

That is why, at the prototype stage, we developed a test to probe several design choices and to answer the following questions:

- Is the notation easy to pick up?
- Does the notation require any specific introduction?
- Can users recognize operations without a context?

Specifically, 8 subjects with sufficient (>1 year) exposure to tensor programming in at least one of the DL frameworks were asked to complete the following questionnaire; subjects were not provided with any description of the function, its implementation, or any other context.[10]

```python
# All imports are ignored. What is the output of each print statement?
x = torch.zeros(10, 20, 30, 40)
y = transpose(x, 'bhwc->bchw')
print(y.shape)  # Q1
y = transpose(x, 'bhwc->bc(h,w)')
print(y.shape)  # Q2
y = transpose(x, 'bhw(c,h1,w1)->b(h,h1)(w,w1)c', h1=2, w1=2)
print(y.shape)  # Q3
y = transpose(x, 'b(h,h1)(w,w1)c->bhw(h1,w1,c)', h1=2, w1=2)
print(y.shape)  # Q4
y1, y2 = transpose(x, 'bhw(c,g)->gbhwc', g=2)
print(y1.shape, y2.shape)  # Q5
y = transpose(x, 'b1sb2t->b1b2st')
print(y.shape)  # Q6
# operator @ is matrix multiplication
t = transpose(x, 'bchw->(bhw)c') @ torch.rand(20, 50)
print(t.shape)  # Q7
y = transpose(t, '(bhw)c2->bc2hw', b_hw=x.shape)  # first operand is t
print(y.shape)  # Q8
y = transpose(t, '(bhw)c2->bc2hw', b=30, h=10)
print(y.shape)  # Q9
```

This user study also probed some ideas that were rejected later: comma as a delimiter within parentheses (non-parenthesized commas conflict with numpy.einsum) and parsing of multiple shape arguments (e.g. b_hw=x.shape). Even some participants who correctly answered the corresponding questions (5 of the 8 participants in both cases) reported initial confusion with these features.

Only two participants were aware of the einsum function, and even they had not used it in their coding practice. Questions Q1, Q2, Q6, Q7, and Q9 were correctly answered by all participants. This confirms that notation based on named axes is intuitive and can be picked up without a special introduction.
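The first two answers can be cross-checked without any einops-like library: the prototype operations reduce to ordinary transpositions and reshapes. A plain-numpy sketch (our own verification, not part of the study):

```python
import numpy as np

x = np.zeros((10, 20, 30, 40))  # axes: b h w c

# Q1: 'bhwc->bchw' is a pure transposition
y1 = x.transpose(0, 3, 1, 2)
assert y1.shape == (10, 40, 20, 30)

# Q2: 'bhwc->bc(h,w)' additionally composes h and w into one axis
y2 = x.transpose(0, 3, 1, 2).reshape(10, 40, 20 * 30)
assert y2.shape == (10, 40, 600)
```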
Post-questionnaire interviews showed that participants could describe a particular context in which an operation (e.g. Q1, Q3) could be used.

While single-character variable names (with an optional digit) did not cause confusion, we replaced them because of conflicts with potential future features (e.g. unitary axes). The function name transpose was also changed to rearrange to avoid conflicts with existing operations in different DL frameworks.

However, the real value of design choices in einops can only be confirmed over a long time scale and within large codebases, settings that cannot be captured by small user studies.

# E CASE STUDY: MULTI-HEAD ATTENTION

Below we provide a shortened implementation of multi-head attention based on Huang (2018). For brevity, we remove weight initialization and mask support.

```python
class ScaledDotProductAttention(nn.Module):
    def __init__(self, temperature, attn_dropout=0.1):
        super().__init__()
        self.temperature = temperature
        self.dropout = nn.Dropout(attn_dropout)
        self.softmax = nn.Softmax(dim=2)

    def forward(self, q, k, v):
        attn = torch.bmm(q, k.transpose(1, 2)) / self.temperature
        attn = self.softmax(attn)
        attn = self.dropout(attn)
        output = torch.bmm(attn, v)
        return output, attn


class MultiHeadAttention(nn.Module):
    def __init__(self, n_head, d_model, d_k, d_v, dropout=0.1):
        super().__init__()
        self.n_head = n_head
        self.d_k = d_k
        self.d_v = d_v
        self.w_qs = nn.Linear(d_model, n_head * d_k)
        self.w_ks = nn.Linear(d_model, n_head * d_k)
        self.w_vs = nn.Linear(d_model, n_head * d_v)
        self.attention = ScaledDotProductAttention(temperature=np.power(d_k, 0.5))
        self.layer_norm = nn.LayerNorm(d_model)
        self.fc = nn.Linear(n_head * d_v, d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, q, k, v):
        d_k, d_v, n_head = self.d_k, self.d_v, self.n_head
        sz_b, len_q, _ = q.size()
        sz_b, len_k, _ = k.size()
        sz_b, len_v, _ = v.size()
        residual = q
        q = self.w_qs(q).view(sz_b, len_q, n_head, d_k)
        k = self.w_ks(k).view(sz_b, len_k, n_head, d_k)
        v = self.w_vs(v).view(sz_b, len_v, n_head, d_v)
        q = q.permute(2, 0, 1, 3).contiguous().view(-1, len_q, d_k)  # (n*b) x lq x dk
        k = k.permute(2, 0, 1, 3).contiguous().view(-1, len_k, d_k)  # (n*b) x lk x dk
        v = v.permute(2, 0, 1, 3).contiguous().view(-1, len_v, d_v)  # (n*b) x lv x dv
        output, attn = self.attention(q, k, v)
        output = output.view(n_head, sz_b, len_q, d_v)
        output = output.permute(1, 2, 0, 3).contiguous().view(sz_b, len_q, -1)  # b x lq x (n*dv)
        output = self.dropout(self.fc(output))
        output = self.layer_norm(output + residual)
        return output, attn
```

The original implementation demonstrates a number of issues that we previously discussed: unchecked and hard-to-track transformations, and the need for comments about shapes. Frequent usage of -1 in reshapes makes it easy to introduce errors. The critical part of the computation (attention) is offloaded to a separate module, which neither checks nor documents its inputs: the dimensions of inputs and outputs are not defined, nor is the connection between them. Thus, the provided ScaledDotProductAttention cannot be considered an independent, self-contained, or reusable module.
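The shape bookkeeping that the implementation above tracks through comments can be checked independently with a small numpy sketch of the same two contractions (illustrative sizes of our choosing; this is not the module itself):

```python
import numpy as np

h, b, l, t, k, v = 8, 2, 5, 7, 16, 16  # heads, batch, query len, key len, dims
q = np.random.rand(h, b, l, k)
kk = np.random.rand(h, b, t, k)
vv = np.random.rand(h, b, t, v)

# attention scores: contract over the key dimension k, keep query/key positions
attn = np.einsum('hblk,hbtk->hblt', q, kk) / np.sqrt(k)
weights = np.exp(attn) / np.exp(attn).sum(-1, keepdims=True)  # softmax over t
out = np.einsum('hblt,hbtv->hblv', weights, vv)

assert out.shape == (h, b, l, v)
```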
Inlining ScaledDotProductAttention inside MultiHeadAttention could improve the situation, but it would also make the problem with shapes more obvious, as more comments would be required for the inlined variables.

We can compare this with the einops implementation, where all computations are done in a single module. Each axis is easy to track throughout the code. An axis index is needed three times (explicitly in softmax, implicitly in fc and layer_norm), but those are easy to find from the code without any additional comments, which demonstrates how einops "implicitly annotates" the code in its proximity.

```python
class MultiHeadAttentionNew(nn.Module):
    def __init__(self, n_head, d_model, d_k, d_v, dropout=0.1):
        super().__init__()
        self.n_head = n_head

        self.w_qs = nn.Linear(d_model, n_head * d_k)
        self.w_ks = nn.Linear(d_model, n_head * d_k)
        self.w_vs = nn.Linear(d_model, n_head * d_v)
        self.fc = nn.Linear(n_head * d_v, d_model)

        self.dropout = nn.Dropout(p=dropout)
        self.attn_dropout = nn.Dropout(p=0.1)
        self.layer_norm = nn.LayerNorm(d_model)

    def forward(self, q, k, v):
        residual = q
        q = rearrange(self.w_qs(q), 'b l (h k) -> h b l k', h=self.n_head)
        k = rearrange(self.w_ks(k), 'b t (h k) -> h b t k', h=self.n_head)
        v = rearrange(self.w_vs(v), 'b t (h v) -> h b t v', h=self.n_head)
        attn = torch.einsum('hblk,hbtk->hblt', [q, k]) / np.sqrt(q.shape[-1])
        attn = self.attn_dropout(attn.softmax(dim=-1))
        output = torch.einsum('hblt,hbtv->hblv', [attn, v])
        output = rearrange(output, 'h b l v -> b l (h v)')
        output = self.dropout(self.fc(output))
        output = self.layer_norm(output + residual)
        return output, attn
```

# F SQUEEZE EXAMPLE FROM GLOW WITH REVERSE TRANSFORMATION

Squeeze and unsqueeze implementations are derived from OpenAI's implementation of Glow (Kingma & Dhariwal, 2018).
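Both operations below are instances of the space-to-depth reshuffle. As a dependency-free reference, the same pair can be written in numpy in NCHW layout (function names here are ours, not Glow's) and checked to be mutually inverse:

```python
import numpy as np

def squeeze2d_np(x, factor=2):
    # numpy analogue of rearrange(x, 'b c (h h2) (w w2) -> b (c h2 w2) h w')
    b, c, H, W = x.shape
    x = x.reshape(b, c, H // factor, factor, W // factor, factor)
    x = x.transpose(0, 1, 3, 5, 2, 4)
    return x.reshape(b, c * factor * factor, H // factor, W // factor)

def unsqueeze2d_np(x, factor=2):
    # numpy analogue of rearrange(x, 'b (c h2 w2) h w -> b c (h h2) (w w2)')
    b, c, H, W = x.shape
    x = x.reshape(b, c // factor ** 2, factor, factor, H, W)
    x = x.transpose(0, 1, 4, 2, 5, 3)
    return x.reshape(b, c // factor ** 2, H * factor, W * factor)

x = np.arange(2 * 4 * 6 * 6, dtype=float).reshape(2, 4, 6, 6)
assert squeeze2d_np(x).shape == (2, 16, 3, 3)
assert np.allclose(unsqueeze2d_np(squeeze2d_np(x)), x)  # round trip is the identity
```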
```python
def squeeze2d(x, factor=2):
    assert factor >= 1
    if factor == 1:
        return x
    shape = x.get_shape()
    height = int(shape[1])
    width = int(shape[2])
    n_channels = int(shape[3])
    assert height % factor == 0 and width % factor == 0
    x = tf.reshape(x, [-1, height // factor, factor,
                       width // factor, factor, n_channels])
    x = tf.transpose(x, [0, 1, 3, 5, 2, 4])
    x = tf.reshape(x, [-1, height // factor, width // factor,
                       n_channels * factor * factor])
    return x


def unsqueeze2d(x, factor=2):
    assert factor >= 1
    if factor == 1:
        return x
    shape = x.get_shape()
    height = int(shape[1])
    width = int(shape[2])
    n_channels = int(shape[3])
    assert n_channels >= 4 and n_channels % 4 == 0
    x = tf.reshape(x, (-1, height, width,
                       int(n_channels / factor ** 2), factor, factor))
    x = tf.transpose(x, [0, 1, 4, 2, 5, 3])
    x = tf.reshape(x, (-1, int(height * factor), int(width * factor),
                       int(n_channels / factor ** 2)))
    return x
```

The einops counterparts for both functions can be written as

```txt
rearrange(x, 'b (c h2 w2) h w -> b c (h h2) (w w2)', h2=factor, w2=factor)
rearrange(x, 'b c (h h2) (w w2) -> b (c h2 w2) h w', h2=factor, w2=factor)
```

where one can readily see from the patterns that the second operation is the inverse of the first.

# G EINOPS PERFORMANCE

To demonstrate that the overhead brought by einops on top of a DL framework is negligible, we measure the performance of several case studies. We compare the original pytorch implementations with their einops versions in the following scenarios (see Table 1): CPU or CUDA backends, with JIT (just-in-time compilation) enabled or disabled, and different input tensor sizes. In our performance benchmark we use einops 0.3.2, pytorch 1.7.1+cu110, and CUDA 11.0, on an AWS EC2 p3.2xlarge instance. The average time of the forward pass is measured in milliseconds by IPython's timeit module.
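Outside IPython, a forward-pass timing of this kind can be reproduced with Python's standard timeit module; the function below is a toy stand-in of our own, not one of the benchmarked modules:

```python
import timeit
import numpy as np

def forward(x):
    # toy stand-in for a model's forward pass
    return np.tanh(x @ x.T).sum()

x = np.random.rand(64, 64)

# average time per call, reported in milliseconds as in Table 1
n_calls = 100
ms = timeit.timeit(lambda: forward(x), number=n_calls) / n_calls * 1e3
assert ms > 0.0
```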
For the multi-head attention case study (Appendix E), the einops implementation uses pytorch's einsum operation, which adds overhead; for this case study we therefore consider the einops implementation both with and without einsum. For unsqueeze, we use the popular open-source pytorch port https://github.com/chaiyujin/glow-pytorch.

These benchmarks show that even einops-rich code runs in a similar range of speeds as pytorch-only code. In particular, unsqueeze2d, which consists only of an einops operation, shows similar speed on CPU for various input sizes and on GPU for larger input sizes. Since the computationally most expensive operations are convolutions and tensor products (linear layers, attention), not rearrangements, this overhead typically makes a marginal contribution to the end result. Thus, attention and permutator are closer to practical use cases, and they demonstrate that the difference in performance becomes notable only for small inputs on GPU.

Table 1: Performance comparison between the original pytorch implementation and its einops version for different case studies. Performance is measured in ms via IPython's timeit module.
| Case Study | Input Size | Impl. | CPU w/o JIT | CPU JIT | CUDA w/o JIT | CUDA JIT |
|---|---|---|---|---|---|---|
| attention | (32, 64, 512) | einops | 21.85±0.04 | 21.13±0.03 | 1.30±0.00 | 1.05±0.00 |
| | | w/o einsum | 21.31±0.04 | 20.99±0.06 | 1.14±0.00 | 0.90±0.00 |
| | | original | 21.09±0.03 | 20.85±0.05 | 1.02±0.00 | 0.71±0.00 |
| | (32, 128, 512) | einops | 48.06±0.06 | 47.37±0.05 | 1.39±0.00 | 1.30±0.00 |
| | | w/o einsum | 46.94±0.04 | 46.28±0.05 | 1.28±0.00 | 1.29±0.00 |
| | | original | 46.87±0.08 | 46.33±0.04 | 1.29±0.00 | 1.28±0.00 |
| | (32, 256, 512) | einops | 150.40±0.18 | 149.53±0.17 | 2.78±0.00 | 2.79±0.00 |
| | | w/o einsum | 146.01±0.20 | 144.96±0.19 | 2.75±0.00 | 2.76±0.00 |
| | | original | 145.19±2.07 | 145.94±0.21 | 2.75±0.00 | 2.75±0.00 |
| | (32, 512, 512) | einops | 495.31±1.19 | 493.90±0.60 | 6.51±0.00 | 6.53±0.01 |
| | | w/o einsum | 488.37±0.93 | 487.16±0.96 | 6.47±0.00 | 6.48±0.01 |
| | | original | 491.20±2.79 | 488.64±0.23 | 6.45±0.01 | 6.46±0.00 |
| permutator | (32, 32, 32, 32) | einops | 7.05±0.01 | 7.03±0.01 | 0.52±0.00 | 0.52±0.00 |
| | | original | 7.08±0.02 | 7.05±0.02 | 0.44±0.00 | 0.40±0.00 |
| | (64, 64, 64, 64) | einops | 237.53±1.67 | 237.05±0.24 | 2.51±0.00 | 2.56±0.00 |
| | | original | 234.57±0.17 | 234.66±0.21 | 2.50±0.01 | 2.54±0.00 |
| unsqueeze2d | (32, 32, 32, 32) | einops | 1.68±0.00 | 1.73±0.00 | 0.07±0.00 | 0.13±0.00 |
| | | original | 1.66±0.00 | 1.72±0.00 | 0.05±0.00 | 0.09±0.00 |
| | (32, 64, 64, 64) | einops | 16.91±0.02 | 17.01±0.04 | 0.14±0.00 | 0.19±0.00 |
| | | original | 16.89±0.03 | 17.14±0.02 | 0.12±0.00 | 0.14±0.00 |
| | (32, 128, 128, 128) | einops | 129.53±0.05 | 129.95±0.10 | 0.71±0.00 | 0.76±0.00 |
| | | original | 129.16±0.16 | 131.44±0.12 | 0.69±0.00 | 0.72±0.00 |
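einops amortizes the cost of handling a pattern by caching parse results keyed by the pattern string (this is quantified in Appendix H). The mechanism can be mimicked with functools.lru_cache; the parser below is a toy sketch of the idea, not einops' actual code:

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=1024)
def parse_pattern(pattern):
    # toy stand-in for pattern parsing; the body runs only on a cache miss
    global calls
    calls += 1
    lhs, rhs = pattern.split('->')
    return tuple(lhs.split()), tuple(rhs.split())

for _ in range(1000):
    parse_pattern('b c h w -> b (c h w)')  # parsed once, then served from cache

assert calls == 1
```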
# H ROLE OF CACHING

To estimate the role of caching, we conduct a study on very small numpy arrays, in which we generate $10^{4}$ patterns performing the same transformation and time them with IPython's timeit functionality. The cache cannot accommodate all these patterns, so syntax parsing, validation, and inference of dimensions run on every call.

```python
from einops import rearrange
import numpy as np

patterns = [f'i{i} (j k) l m ... -> i{i} j (k l) (m ...)' for i in range(10_000)]
x = np.zeros([2, 2, 2, 2, 2])  # small tensor

# caching mechanism is not used: every pattern is new
%%timeit
for pattern in patterns:
    rearrange(x, pattern, j=2)

# caching is in use: the same pattern is reused
%%timeit
pattern = patterns[0]
for _pattern in patterns:
    rearrange(x, pattern, j=2)
```

Outputs for the two cases show a more than 10-fold difference:

- without caching: 663 ms ± 20.9 ms per loop (mean ± std. dev. of 7 runs)
- with caching: 37.1 ms ± 165 µs per loop (mean ± std. dev. of 7 runs)

Times were measured on a MacBook Pro 2018 using Python 3.9.0, numpy 1.20.3, and einops 0.3.2.

# I EINOPS FLEXIBILITY AND APPLICABILITY

To confirm the wide applicability of einops, we search Github for pytorch code that actively relies on view/reshape/transpose/permute. We restrict ourselves to popular (>1000 stars) repositories. Official documentation, examples, and torchvision are also included (all three repositories pass the previous criterion). We sample 16 fragments that cover a diverse set of applications and architectures, along with smaller fragments from the same repositories; see Table 2.

Table 2: Comparison between the original implementation and the rewritten einops version, measured in number of code lines and characters.
| Fragment | Lines (original) | Lines (einops) | Characters (original) | Characters (einops) |
|---|---|---|---|---|
| LeNet-like network | 19 | 15 | 626 | 352 |
| Super-resolution | 17 | 11 | 678 | 478 |
| Style transfer | 6 | 3 | 192 | 107 |
| LSTM and language modeling | 15 | 16 | 745 | 753 |
| LSTM token-by-token | 31 | 22 | 1266 | 990 |
| ShuffleNet | 126 | 43 | 4007 | 1932 |
| ResNet | 58 | 42 | 2106 | 1658 |
| FastText | 24 | 8 | 723 | 281 |
| CNN for text classification | 43 | 16 | 1767 | 799 |
| Tacotron | 36 | 23 | 818 | 662 |
| Transformer's attention | 78 | 32 | 2637 | 1538 |
| Self-attention GANs | 34 | 19 | 1506 | 959 |
| Time-sequence prediction | 27 | 25 | 1125 | 1015 |
| Spatial transformer network | 36 | 27 | 1099 | 1012 |
| Highway convolutions | 6 | 6 | 256 | 239 |
| GLOW | 30 | 5 | 917 | 250 |
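To give a flavor of such rewrites, consider a minimal hypothetical fragment (our own illustration, not one of the 16 sampled): a chained transpose-and-reshape whose einops equivalent would be rearrange(x, 'b c h w -> b (h w) c'). The numpy version below checks the chained form against an explicit loop:

```python
import numpy as np

x = np.random.rand(8, 3, 4, 5)  # b c h w
b, c, h, w = x.shape

# "original" style: chained transpose + reshape, shapes tracked in a comment
y_chained = x.transpose(0, 2, 3, 1).reshape(b, h * w, c)  # b (h w) c

# element-wise reference: position (i, j) maps to composed index i*w + j
y_manual = np.empty((b, h * w, c))
for i in range(h):
    for j in range(w):
        y_manual[:, i * w + j, :] = x[:, :, i, j]

assert y_chained.shape == (8, 20, 3)
assert np.allclose(y_chained, y_manual)
```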
These examples are rewritten using einops and torch.einsum; the interface of the models is kept identical to the original implementations to demonstrate that einops does not demand global code adjustments. In addition, introducing einops layers allows us to avoid new classes and to reuse nn.Sequential in four cases (LeNet, Super-resolution, FastText, ResNet).

The resulting einops code is shorter (see Table 2), even though we use fewer method chains and do not pack multiple operations into a single line. Out of 16 fragments, the length (measured in number of code lines or in number of characters; the two measures agree) is higher than the original in one case (LSTM and language modeling), identical in one case (Highway convolutions), and tangibly lower in all other cases.[11]

# J EINOPS ADOPTION

When it comes to estimating the suitability of a notation (or any other tool) for research, one can try to dissect this question into many small ones: how many different cases are covered by the notation? can it be implemented? can it be integrated with existing tools? is the resulting code shorter? how fast is the resulting code? is modification with the notation easier or faster? While our studies cover many such factors, this "dissection" does not provide a holistic view of the notation's role and its integration into existing workflows and tooling.

einops has taken a "test by time" approach. Developed, implemented, and open-sourced in 2018, it has been adopted in public projects of the majority of the largest AI laboratories, which is a strong indicator of novelty, usefulness, and reliability. For context, pytorch claims to be user-focused (Paszke et al., 2019); in a keynote talk,[12] its authors confirm the poor relevance of micro-benchmarks and rely on adoption as a (lagging) confirmation of design choices.

As of November 2021, Github reports more than 1000 usages of einops by other repositories, with increasing frequency (the last 100 were published within 18 days).
In the majority of cases these repositories implement approaches or DL models that were proposed after the release of einops. The implementations use different DL backends and span different modalities (including non-traditional applications such as fluid dynamics).

This inductive validation (if the notation was applicable to a large number of new cases, it should work for more) is extremely time-inefficient, but provides the most reliable proof. einops is used in open-source projects from AI laboratories that use and maintain competing deep learning stacks, which forms another important (though less statistically reliable) indication of adoption.
# EXPRESSIVENESS AND APPROXIMATION PROPERTIES OF GRAPH NEURAL NETWORKS

Floris Geerts

Department of Computer Science, University of Antwerp, Belgium

floris.geerts@uantwerpen.be

Juan L. Reutter

School of Engineering, Pontificia Universidad Católica de Chile, Chile & IMFD, Chile

jreutter@ing.puc.cl

# ABSTRACT

Characterizing the separation power of graph neural networks (GNNs) provides an understanding of their limitations for graph learning tasks. Results regarding separation power are, however, usually geared at specific GNN architectures, and tools for understanding arbitrary GNN architectures are generally lacking. We provide an elegant way to easily obtain bounds on the separation power of GNNs in terms of the Weisfeiler-Leman (WL) tests, which have become the yardstick to measure the separation power of GNNs. The crux is to view GNNs as expressions in a procedural tensor language describing the computations in the layers of the GNNs. Then, by a simple analysis of the obtained expressions, in terms of the number of indexes and the nesting depth of summations, bounds on the separation power in terms of the WL tests readily follow. We use the tensor language to define Higher-Order Message-Passing Neural Networks (or $k$-MPNNs), a natural extension of MPNNs. Furthermore, the tensor language point of view allows for the derivation of universality results for classes of GNNs in a natural way. Our approach provides a toolbox with which GNN architecture designers can analyze the separation power of their GNNs, without needing to know the intricacies of the WL tests. We also provide insights into what is needed to boost the separation power of GNNs.

# 1 INTRODUCTION

If machine learning models are to be ubiquitous, Graph Neural Networks (GNNs) (Merkwirth & Lengauer, 2005; Scarselli et al., 2009) cover many popular deep learning methods for graph learning tasks (see Hamilton (2020) for a recent overview).
These methods typically compute vector embeddings of vertices or graphs by relying on the underlying adjacency information. Invariance (for graph embeddings) and equivariance (for vertex embeddings) of GNNs ensure that these methods are oblivious to the precise representation of the graphs.

Separation power. Our primary focus is on the separation power of GNN architectures, i.e., on their ability to separate vertices or graphs by means of the computed embeddings. It has become standard to characterize GNN architectures in terms of the separation power of graph algorithms such as color refinement (CR) and the $k$-dimensional Weisfeiler-Leman tests ($k$-WL), as initiated in Xu et al. (2019) and Morris et al. (2019). Unfortunately, understanding the separation power of any given GNN architecture requires complex proofs, geared at the specifics of the architecture. We provide a tensor language-based technique to analyze the separation power of general GNNs.

Tensor languages. Matrix query languages (Brijder et al., 2019; Geerts et al., 2021b) were defined to assess the expressive power of linear algebra. Balcilar et al. (2021a) observe that, by casting various GNNs into the MATLANG (Brijder et al., 2019) matrix query language, one can use existing separation results (Geerts, 2021) to obtain upper bounds on the separation power of GNNs in terms of 1-WL and 2-WL. In this paper, we considerably extend this approach by defining, and studying, a new general-purpose tensor language specifically designed for modeling GNNs. As in Balcilar et al. (2021a), our focus on tensor languages allows us to obtain new insights about GNN architectures.

First, since tensor languages can only define invariant and equivariant graph functions, any GNN that can be cast in our tensor language inherits these desired properties. More importantly, the separation power of our tensor language is as closely related to CR and $k$-WL as GNNs are.
Loosely speaking, if tensor language expressions use $k + 1$ indices, then their separation power is bounded by $k$ -WL. Furthermore, if the maximum nesting of summations in the expression is $t$ , then $t$ rounds of $k$ -WL are needed to obtain an upper bound on the separation power. A similar connection is obtained for CR and a fragment of tensor language that we call "guarded" tensor language. + +We thus reduce the problem of assessing the separation power of any specific GNN architecture to the problem of specifying it in our tensor language, analyzing the number of indices used and counting their summation depth. This is usually much easier than dealing with the intricacies of CR and $k$ -WL, as casting GNNs in our tensor language is often as simple as writing down their layer-based definition. We believe that this provides a nice toolbox for GNN designers to assess the separation power of their architecture. We use this toolbox to recover known results about the separation power of specific GNN architectures such as GINs (Xu et al., 2019), GCNs (Kipf & Welling, 2017), Folklore GNNs (Maron et al., 2019b), $k$ -GNNs (Morris et al., 2019), and several others. We also derive new results: we answer an open problem posed by Maron et al. (2019a) by showing that the separation power of Invariant Graph Networks ( $k$ -IGNs), introduced by Maron et al. (2019b), is bounded by ( $k - 1$ )-WL. In addition, we revisit the analysis by Balcilar et al. (2021b) of ChebNet (Defferrard et al., 2016), and show that CayleyNet (Levie et al., 2019) is bounded by 2-WL. + +When writing down GNNs in our tensor language, the fewer indices needed, the stronger the bounds in terms of $k$ -WL we obtain. After all, $(k - 1)$ -WL is known to be strictly less separating than $k$ -WL (Otto, 2017). Thus, it is important to minimize the number of indices used in tensor language expressions.
We connect this number to the notion of treewidth: expressions of treewidth $k$ can be translated into expressions using $k + 1$ indices. This corresponds to optimizing expressions, as done in many areas in machine learning, by reordering the summations (a.k.a. variable elimination). + +Approximation and universality. We also consider the ability of GNNs to approximate general invariant or equivariant graph functions. Once more, instead of focusing on specific architectures, we use our tensor languages to obtain general approximation results, which naturally translate to universality results for GNNs. We show: $(k + 1)$ -index tensor language expressions suffice to approximate any (invariant/equivariant) graph function whose separating power is bounded by $k$ -WL, and we can further refine this by comparing the number of rounds in $k$ -WL with the summation depth of the expressions. These results provide a finer picture than the one obtained by Azizian & Lelarge (2021). Furthermore, focusing on "guarded" tensor expressions yields a similar universality result for CR, a result that, to our knowledge, was not known before. We also provide the link between approximation results for tensor expressions and GNNs, enabling us to transfer our insights into universality properties of GNNs. As an example, we show that $k$ -IGNs can approximate any graph function that is less separating than $(k - 1)$ -WL. This case was left open in Azizian & Lelarge (2021). + +In summary, we draw new and interesting connections between tensor languages, GNN architectures and classic graph algorithms. We provide a general recipe to bound the separation power of GNNs, optimize them, and understand their approximation power. We show the usefulness of our method by recovering several recent results, as well as new results, some of them left open in previous work. + +Related work. 
Separation power has been studied for specific classes of GNNs (Morris et al., 2019; Xu et al., 2019; Maron et al., 2019b; Chen et al., 2019; Morris et al., 2020; Azizian & Lelarge, 2021). A first general result concerns the bounds in terms of CR and 1-WL of Message-Passing Neural Networks (Gilmer et al., 2017; Morris et al., 2019; Xu et al., 2019). Balcilar et al. (2021a) use the MATLANG matrix query language to obtain upper bounds on the separation power of various GNNs. MATLANG can only be used to obtain bounds up to 2-WL and is limited to matrices. Our tensor language is more general and flexible and allows for reasoning over the number of indices, treewidth, and summation depth of expressions. These are all crucial for our main results. The tensor language introduced resembles sum-MATLANG (Geerts et al., 2021b), but with the added ability to represent tensors. Neither separation power nor guarded fragments were considered in Geerts et al. (2021b). See Section A in the supplementary material for more details. For universality, Azizian & Lelarge (2021) is closest in spirit. Our approach provides an elegant way to recover and extend their results. Azizian & Lelarge (2021) describe how their work (and hence also ours) encompasses previous works (Keriven & Peyré, 2019; Maron et al., 2019c; Chen et al., 2019). Our results use connections between $k$ -WL and logics (Immerman & Lander, 1990; Cai et al., 1992), and between CR and guarded logics (Barceló et al., 2020). The optimization of algebraic computations and the use of treewidth relate to the approaches by Aji & McEliece (2000) and Abo Khamis et al. (2016). + +# 2 BACKGROUND + +We denote sets by $\{\}$ and multisets by $\{\{\}\}$ . For $n\in \mathbb{N}$ , $n > 0$ , $[n]\coloneqq \{1,\ldots ,n\}$ . Vectors are denoted by $\pmb {v},\pmb {w},\dots$ , matrices by $\pmb {A},\pmb {B},\dots$ , and tensors by $\mathbf{S},\mathbf{T},\dots$ .
Furthermore, $v_{i}$ is the $i$ -th entry of vector $\pmb{v}$ , $A_{ij}$ is the $(i,j)$ -th entry of matrix $\pmb{A}$ and $S_{\pmb{i}}$ denotes the $\pmb{i} = (i_1,\dots,i_k)$ -th entry of a tensor $\mathbf{S}$ . If certain dimensions are unspecified, then this is denoted by a “:”. For example, $A_{i:}$ and $A_{:j}$ denote the $i$ -th row and the $j$ -th column of matrix $\pmb{A}$ , respectively. Similarly for slices of tensors. + +We consider undirected simple graphs $G = (V_{G}, E_{G}, \mathsf{col}_{G})$ equipped with a vertex-labelling $\mathsf{col}_G: V_G \to \mathbb{R}^\ell$ . We assume that graphs have size $n$ , so $V_{G}$ consists of $n$ vertices and we often identify $V_{G}$ with $[n]$ . For a vertex $v \in V_{G}$ , $N_{G}(v) := \{u \in V_{G} \mid vu \in E_{G}\}$ . We let $\mathcal{G}$ be the set of all graphs of size $n$ and let $\mathcal{G}_s$ be the set of pairs $(G, \mathbf{v})$ with $G \in \mathcal{G}$ and $\mathbf{v} \in V_G^s$ . Note that $\mathcal{G} = \mathcal{G}_0$ . + +The color refinement algorithm (CR) (Morgan, 1965) iteratively computes vertex labellings based on neighboring vertices, as follows. For a graph $G$ and vertex $v \in V_G$ , $\mathsf{cr}^{(0)}(G,v) \coloneqq \mathsf{col}_G(v)$ . Then, for $t \geq 0$ , $\mathsf{cr}^{(t+1)}(G,v) \coloneqq (\mathsf{cr}^{(t)}(G,v), \{\{\mathsf{cr}^{(t)}(G,u) \mid u \in N_G(v)\}\})$ . We collect all vertex labels to obtain a label for the entire graph by defining $\mathsf{gcr}^{(t)}(G) \coloneqq \{\{\mathsf{cr}^{(t)}(G,v) \mid v \in V_G\}\}$ . The $k$ -dimensional Weisfeiler-Leman algorithm ( $k$ -WL) (Cai et al., 1992) iteratively computes labellings of $k$ -tuples of vertices. For a $k$ -tuple $\pmb{v}$ , its atomic type in $G$ , denoted by $\mathsf{atp}_k(G,\pmb{v})$ , is a vector in $\mathbb{R}^{2\binom{k}{2} + k\ell}$ . The first $\binom{k}{2}$ entries are $0/1$ -values encoding the equality type of $\pmb{v}$ , i.e., whether $v_i = v_j$ for $1 \leq i < j \leq k$ .
The second $\binom{k}{2}$ entries are $0/1$ -values encoding adjacency information, i.e., whether $v_i v_j \in E_G$ for $1 \leq i < j \leq k$ . The last $k\ell$ real-valued entries correspond to $\mathsf{col}_G(v_i) \in \mathbb{R}^\ell$ for $1 \leq i \leq k$ . Initially, for a graph $G$ and $\pmb{v} \in V_G^k$ , $k$ -WL assigns the label $\mathsf{wl}_k^{(0)}(G,\pmb{v}) \coloneqq \mathsf{atp}_k(G,\pmb{v})$ . For $t \geq 0$ , $k$ -WL revises the label according to $\mathsf{wl}_k^{(t+1)}(G,\pmb{v}) \coloneqq (\mathsf{wl}_k^{(t)}(G,\pmb{v}), M)$ with $M := \{\{(\mathsf{atp}_{k+1}(G,\pmb{v}u), \mathsf{wl}_k^{(t)}(G,\pmb{v}[u/1]), \ldots, \mathsf{wl}_k^{(t)}(G,\pmb{v}[u/k])) \mid u \in V_G\}\}$ , where $\pmb{v}[u/i] := (v_1, \ldots, v_{i-1}, u, v_{i+1}, \ldots, v_k)$ . We use $k$ -WL to assign labels to vertices and graphs by defining: $\mathsf{vwl}_k^{(t)}(G,v) \coloneqq \mathsf{wl}_k^{(t)}(G,(v,\ldots,v))$ , for vertex-labellings, and $\mathsf{gwl}_k^{(t)}(G) \coloneqq \{\{\mathsf{wl}_k^{(t)}(G,\pmb{v}) \mid \pmb{v} \in V_G^k\}\}$ , for graph-labellings. We use $\mathsf{cr}^{(\infty)}, \mathsf{gcr}^{(\infty)}, \mathsf{vwl}_k^{(\infty)}$ , and $\mathsf{gwl}_k^{(\infty)}$ to denote the stable labellings produced by the corresponding algorithms over an arbitrary number of rounds. Our version of 1-WL differs from CR in that 1-WL also uses information from non-adjacent vertices; this distinction only matters for vertex embeddings (Grohe, 2021). We use the "folklore" $k$ -WL of Cai et al. (1992), except that Cai et al. use 1-WL to refer to CR. While it is equivalent to the "oblivious" $(k+1)$ -WL (Grohe, 2021) used in some other works on GNNs, care is needed when comparing those works to ours. + +Let $G$ be a graph with $V_{G} = [n]$ and let $\sigma$ be a permutation of $[n]$ . We denote by $\sigma \star G$ the isomorphic copy of $G$ obtained by applying the permutation $\sigma$ . Similarly, for $\pmb{v} \in V_G^k$ , $\sigma \star \pmb{v}$ is the permuted version of $\pmb{v}$ . Let $\mathbb{F}$ be some feature space.
A function $f: \mathcal{G}_0 \to \mathbb{F}$ is called invariant if $f(G) = f(\sigma \star G)$ for any permutation $\sigma$ . More generally, $f: \mathcal{G}_s \to \mathbb{F}$ is equivariant if $f(\sigma \star G, \sigma \star v) = f(G, v)$ for any permutation $\sigma$ . The functions $\mathsf{cr}^{(t)}: \mathcal{G}_1 \to \mathbb{F}$ and $\mathsf{vwl}_k^{(t)}: \mathcal{G}_1 \to \mathbb{F}$ are equivariant, whereas $\mathsf{gcr}^{(t)}: \mathcal{G}_0 \to \mathbb{F}$ and $\mathsf{gwl}_k^{(t)}: \mathcal{G}_0 \to \mathbb{F}$ are invariant, for any $t \geq 0$ and $k \geq 1$ . + +# 3 SPECIFYING GNNS + +Many GNNs use linear algebra computations on vectors, matrices or tensors, interleaved with the application of activation functions or MLPs. To understand the separation power of GNNs, we introduce a specification language, TL, for tensor language, that allows us to specify any algebraic computation in a procedural way by explicitly stating how each entry is to be computed. We gauge the separation power of GNNs by specifying them as TL expressions, and syntactically analyzing the components of such TL expressions. This technique gives rise to Higher-Order Message-Passing Neural Networks (or $k$ -MPNNs), a natural extension of MPNNs (Gilmer et al., 2017). For simplicity, we present TL using summation aggregation only, but arbitrary aggregation functions on multisets of real values can be used as well (Section C.5 in the supplementary material). + +To introduce TL, consider a typical layer in a GNN of the form $\pmb{F}^{\prime} = \sigma (\pmb{A}\cdot \pmb{F}\cdot \pmb{W})$ , where $\pmb{A}\in \mathbb{R}^{n\times n}$ is an adjacency matrix, $\pmb {F}\in \mathbb{R}^{n\times \ell}$ are vertex features such that $F_{i:}\in \mathbb{R}^{\ell}$ is the feature vector of vertex $i$ , $\sigma$ is a non-linear activation function, and $\pmb{W}\in \mathbb{R}^{\ell \times \ell}$ is a weight matrix.
By exposing the indices in the matrices and vectors we can equivalently write: for $i\in [n]$ and $s\in [\ell ]$ : + +$$ +F _ {i s} ^ {\prime} := \sigma \left(\sum_ {j \in [ n ]} A _ {i j} \cdot \left(\sum_ {t \in [ \ell ]} W _ {t s} \cdot F _ {j t}\right)\right). +$$ + +In TL, we do not work with specific matrices or indices ranging over $[n]$ , but focus instead on expressions applicable to any matrix. We use index variables $x_{1}$ and $x_{2}$ instead of $i$ and $j$ , replace $A_{ij}$ with a placeholder $E(x_{1},x_{2})$ and $F_{jt}$ with placeholders $P_{t}(x_{2})$ , for $t\in [\ell ]$ . We then represent the above computation in TL by $\ell$ expressions $\psi_s(x_1)$ , one for each feature column, as follows: + +$$ +\psi_ {s} (x _ {1}) = \sigma \Big (\sum_ {x _ {2}} E (x _ {1}, x _ {2}) \cdot \big (\sum_ {t \in [ \ell ]} W _ {t s} \cdot P _ {t} (x _ {2}) \big) \Big). +$$ + +These are purely syntactic expressions. To give them a semantics, we assign to $E$ a matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$ , to $P_{t}$ column vectors $\mathbf{F}_{:t} \in \mathbb{R}^{n \times 1}$ , for $t \in [\ell]$ , and to $x_{1}$ an index $i \in [n]$ . By letting the variable $x_{2}$ under the summation range over $1, 2, \ldots, n$ , the TL expression $\psi_{s}(i)$ evaluates to $F_{is}'$ . As such, $\mathbf{F}' = \sigma(\mathbf{A} \cdot \mathbf{F} \cdot \mathbf{W})$ can be represented as a specific instance of the above TL expressions. Throughout the paper we reason about expressions in TL rather than specific instances thereof. Importantly, by showing that certain properties hold for expressions in TL, these properties are inherited by all of their instances. We use TL to enable a theoretical analysis of the separating power of GNNs; it is not intended as a practical programming language for GNNs. + +Syntax. We first give the syntax of TL expressions.
We have a binary predicate $E$ , to represent adjacency matrices, and unary vertex predicates $P_{s}$ , $s \in [\ell]$ , to represent column vectors encoding the $\ell$ -dimensional vertex labels. In addition, we have a (possibly infinite) set $\Omega$ of functions, such as activation functions or MLPs. Then, $\mathsf{TL}(\Omega)$ expressions are defined by the following grammar: + +$$ +\varphi := \mathbf{1}_{x\,\mathrm{op}\,y} \mid E (x, y) \mid P _ {s} (x) \mid \varphi \cdot \varphi \mid \varphi + \varphi \mid a \cdot \varphi \mid f (\varphi , \dots , \varphi) \mid \sum_ {x} \varphi +$$ + +where $\mathrm{op} \in \{=, \neq\}$ , $x, y$ are index variables that specify entries in tensors, $s \in [\ell]$ , $a \in \mathbb{R}$ , and $f \in \Omega$ . Summation aggregation is captured by $\sum_{x} \varphi$ . We sometimes make explicit which functions are used in expressions in $\mathsf{TL}(\Omega)$ by writing $\mathsf{TL}(f_1, f_2, \ldots)$ for $f_1, f_2, \ldots$ in $\Omega$ . For example, the expressions $\psi_s(x_1)$ described earlier are in $\mathsf{TL}(\sigma)$ . + +The set of free index variables of an expression $\varphi$ , denoted by $\mathrm{free}(\varphi)$ , determines the order of the tensor represented by $\varphi$ . It is defined inductively: $\mathrm{free}(\mathbf{1}_{x\,\mathrm{op}\,y}) = \mathrm{free}(E(x, y)) := \{x, y\}$ , $\mathrm{free}(P_s(x)) := \{x\}$ , $\mathrm{free}(\varphi_1 \cdot \varphi_2) = \mathrm{free}(\varphi_1 + \varphi_2) := \mathrm{free}(\varphi_1) \cup \mathrm{free}(\varphi_2)$ , $\mathrm{free}(a \cdot \varphi_1) := \mathrm{free}(\varphi_1)$ , $\mathrm{free}(f(\varphi_1, \ldots, \varphi_p)) := \cup_{i \in [p]} \mathrm{free}(\varphi_i)$ , and $\mathrm{free}(\sum_x \varphi_1) := \mathrm{free}(\varphi_1) \setminus \{x\}$ . We sometimes explicitly write the free indices. In our example expressions $\psi_s(x_1)$ , $x_1$ is the free index variable.
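The inductive definition of $\mathrm{free}(\varphi)$ can be read as a small recursion over expression trees. The following sketch is our own illustration (the dataclass names and the restriction to the $E$, $P_s$, product, and summation constructs are assumptions, not part of the paper):

```python
from dataclasses import dataclass

# Minimal AST for a fragment of TL; class names are illustrative only.
@dataclass(frozen=True)
class E:          # adjacency predicate E(x, y)
    x: str
    y: str

@dataclass(frozen=True)
class P:          # vertex-label predicate P_s(x)
    s: int
    x: str

@dataclass(frozen=True)
class Mul:        # product phi1 * phi2
    left: object
    right: object

@dataclass(frozen=True)
class Sum:        # summation over x, which binds the index variable x
    x: str
    body: object

def free(phi):
    """The inductive definition of free(phi) from the text."""
    if isinstance(phi, E):
        return {phi.x, phi.y}
    if isinstance(phi, P):
        return {phi.x}
    if isinstance(phi, Mul):
        return free(phi.left) | free(phi.right)
    if isinstance(phi, Sum):
        return free(phi.body) - {phi.x}
    raise TypeError(f"unknown expression: {phi!r}")

# A psi-style expression: sum_{x2} E(x1, x2) * P_1(x2) has free index {x1},
# so it represents a vertex embedding (an order-1 tensor).
psi = Sum("x2", Mul(E("x1", "x2"), P(1, "x2")))
assert free(psi) == {"x1"}
```

The number of free index variables directly gives the order of the represented tensor, matching the role of $\mathrm{free}(\varphi)$ in the text.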
+ +An important class of expressions consists of those that only use index variables $\{x_{1},\ldots ,x_{k}\}$ . We denote by $\mathsf{TL}_k(\Omega)$ the $k$ -index variable fragment of $\mathsf{TL}(\Omega)$ . The expressions $\psi_s(x_1)$ are in $\mathsf{TL}_2(\sigma)$ . + +Semantics. We next define the semantics of expressions in $\mathsf{TL}(\Omega)$ . Let $G = (V_G, E_G, \mathrm{col}_G)$ be a vertex-labelled graph. We start by defining the interpretation $[\cdot, \nu]_G$ of the predicates $E$ , $P_s$ and the (dis)equality predicates, relative to $G$ and a valuation $\nu$ assigning a vertex to each index variable: + +$$ +\begin{array}{ll} \llbracket E(x, y), \nu \rrbracket_{G} := \text{if } \nu(x)\nu(y) \in E_{G} \text{ then } 1 \text{ else } 0 & \llbracket P_{s}(x), \nu \rrbracket_{G} := \operatorname{col}_{G}(\nu(x))_{s} \in \mathbb{R} \\ \llbracket \mathbf{1}_{x\,\mathrm{op}\,y}, \nu \rrbracket_{G} := \text{if } \nu(x) \mathrel{\mathrm{op}} \nu(y) \text{ then } 1 \text{ else } 0. \end{array} +$$ + +In other words, $E$ is interpreted as the adjacency matrix of $G$ and the $P_{s}$ 's interpret the vertex-labelling $\mathrm{col}_G$ .
Furthermore, we lift interpretations to arbitrary expressions in $\mathsf{T}\mathsf{L}(\Omega)$ , as follows: + +$$ +\begin{array}{l l} \llbracket \varphi_ {1} \cdot \varphi_ {2}, \nu \rrbracket_ {G} := \llbracket \varphi_ {1}, \nu \rrbracket_ {G} \cdot \llbracket \varphi_ {2}, \nu \rrbracket_ {G} & \llbracket \varphi_ {1} + \varphi_ {2}, \nu \rrbracket_ {G} := \llbracket \varphi_ {1}, \nu \rrbracket_ {G} + \llbracket \varphi_ {2}, \nu \rrbracket_ {G} \\ \llbracket \sum_ {x} \varphi_ {1}, \nu \rrbracket_ {G} := \sum_ {v \in V _ {G}} \llbracket \varphi_ {1}, \nu [ x \mapsto v ] \rrbracket_ {G} & \llbracket a \cdot \varphi_ {1}, \nu \rrbracket_ {G} := a \cdot \llbracket \varphi_ {1}, \nu \rrbracket_ {G} \end{array} +$$ + +$$ +\llbracket f (\varphi_ {1}, \dots , \varphi_ {p}), \nu \rrbracket_ {G} := f (\llbracket \varphi_ {1}, \nu \rrbracket_ {G}, \dots , \llbracket \varphi_ {p}, \nu \rrbracket_ {G}) +$$ + +where $\nu[x \mapsto v]$ is the valuation $\nu$ except that it maps the index $x$ to the vertex $v \in V_G$ . For simplicity, we identify valuations with their images. For example, $\llbracket \varphi(x), v \rrbracket_G$ denotes $\llbracket \varphi(x), x \mapsto v \rrbracket_G$ . To illustrate the semantics, for each $v \in V_G$ , our example expressions satisfy $\llbracket \psi_s, v \rrbracket_G = F_{vs}'$ for $F' = \sigma(A \cdot F \cdot W)$ when $A$ is the adjacency matrix of $G$ and $F$ represents the vertex labels. + +$k$ -MPNNs. Consider a function $f: \mathcal{G}_s \to \mathbb{R}^\ell: (G, \pmb{v}) \mapsto f(G, \pmb{v}) \in \mathbb{R}^\ell$ for some $\ell \in \mathbb{N}$ .
We say that the function $f$ can be represented in $\mathsf{TL}(\Omega)$ if there exist $\ell$ expressions $\varphi_1(x_1, \ldots, x_s), \ldots, \varphi_\ell(x_1, \ldots, x_s)$ in $\mathsf{TL}(\Omega)$ such that for each graph $G$ and each $s$ -tuple $\pmb{v} \in V_G^s$ : + +$$ +f (G, \boldsymbol {v}) = \left(\llbracket \varphi_ {1}, \boldsymbol {v} \rrbracket_ {G}, \dots , \llbracket \varphi_ {\ell}, \boldsymbol {v} \rrbracket_ {G}\right). +$$ + +Of particular interest are $k$ th-order MPNNs (or $k$ -MPNNs), which refer to the class of functions that can be represented in $\mathsf{TL}_{k+1}(\Omega)$ . We can regard GNNs as functions $f: \mathcal{G}_s \to \mathbb{R}^\ell$ . Hence, a GNN is a $k$ -MPNN if its corresponding functions are $k$ -MPNNs. For example, we can interpret $\pmb{F}' = \sigma(\pmb{A} \cdot \pmb{F} \cdot \pmb{W})$ as a function $f: \mathcal{G}_1 \to \mathbb{R}^\ell$ such that $f(G, v) := \pmb{F}_{v:}'$ . We have seen that for each $s \in [\ell]$ , $\llbracket \psi_s, v \rrbracket_G = F_{vs}'$ with $\psi_s \in \mathsf{TL}_2(\sigma)$ . Hence, $f(G, v) = (\llbracket \psi_1, v \rrbracket_G, \ldots, \llbracket \psi_\ell, v \rrbracket_G)$ and thus $f$ belongs to the 1-MPNNs, and our example GNN is a 1-MPNN. + +TL represents equivariant or invariant functions. We make a simple observation which follows from the type of operators allowed in expressions in $\mathsf{TL}(\Omega)$ . + +Proposition 3.1. Any function $f: \mathcal{G}_s \to \mathbb{R}^\ell$ represented in $\mathrm{TL}(\Omega)$ is equivariant (invariant if $s = 0$ ). + +An immediate consequence is that when a GNN is a $k$ -MPNN, it is automatically invariant or equivariant, depending on whether graph or vertex tuple embeddings are considered. + +# 4 SEPARATION POWER OF TENSOR LANGUAGE + +Our first main results concern the characterization of the separation power of tensor languages in terms of the color refinement and $k$ -dimensional Weisfeiler-Leman algorithms.
We provide a fine-grained characterization by taking the number of rounds of these algorithms into account. This will allow for measuring the separation power of classes of GNNs in terms of their number of layers. + +# 4.1 SEPARATION POWER + +We define the separation power of graph functions in terms of an equivalence relation, based on the definition from Azizian & Lelarge (2021), first focusing on their ability to separate vertices. + +Definition 1. Let $\mathcal{F}$ be a set of functions $f: \mathcal{G}_1 \to \mathbb{R}^{\ell_f}$ . The equivalence relation $\rho_1(\mathcal{F})$ is defined by $\mathcal{F}$ on $\mathcal{G}_1$ as follows: $((G, v), (H, w)) \in \rho_1(\mathcal{F}) \Longleftrightarrow \forall f \in \mathcal{F}, f(G, v) = f(H, w)$ . + +In other words, when $((G,v),(H,w))\in \rho_1(\mathcal{F})$ , no function in $\mathcal{F}$ can separate $v$ in $G$ from $w$ in $H$ . For example, we can view $\mathsf{cr}^{(t)}$ and $\mathsf{vwl}_k^{(t)}$ as functions from $\mathcal{G}_1$ to some $\mathbb{R}^\ell$ . As such, $\rho_1(\mathsf{cr}^{(t)})$ and $\rho_{1}(\mathsf{vwl}_{k}^{(t)})$ measure the separation power of these algorithms. The following strict inclusions are known: for all $k\geq 1$ , $\rho_{1}(\mathsf{vwl}_{k + 1}^{(t)})\subset \rho_{1}(\mathsf{vwl}_{k}^{(t)})$ and $\rho_{1}(\mathsf{vwl}_{1}^{(t)})\subset \rho_{1}(\mathsf{cr}^{(t)})$ (Otto, 2017; Grohe, 2021). It is also known that more rounds $(t)$ increase the separation power of these algorithms (Fürer, 2001). + +For a fragment $\mathcal{L}$ of $\mathrm{TL}(\Omega)$ expressions, we define $\rho_{1}(\mathcal{L})$ as the equivalence relation associated with all functions $f:\mathcal{G}_1\to \mathbb{R}^{\ell_f}$ that can be represented in $\mathcal{L}$ . By definition, we thus consider here expressions in $\mathrm{TL}(\Omega)$ with one free index variable, resulting in vertex embeddings.
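To make $\rho_1(\mathsf{cr}^{(t)})$ concrete, the following sketch (our own illustration, not code from the paper) runs $t$ rounds of color refinement and checks whether two (graph, vertex) pairs are separated; adjacency lists are encoded as dictionaries:

```python
def cr_labels(adj, colors, t):
    """t rounds of color refinement; labels are nested (hashable) tuples."""
    lab = {v: colors[v] for v in adj}
    for _ in range(t):
        # cr^{(s+1)}(G,v) = (cr^{(s)}(G,v), multiset of neighbor labels);
        # sorting the neighbor labels canonicalizes the multiset.
        lab = {v: (lab[v], tuple(sorted(lab[u] for u in adj[v]))) for v in adj}
    return lab

# A 6-cycle vs. two disjoint triangles: every vertex is 2-regular with the
# same initial color, so CR never separates their vertices, i.e.,
# ((G,v),(H,w)) lies in rho_1(cr^{(t)}) for every t.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
unif = {v: 0 for v in range(6)}
for t in range(4):
    assert cr_labels(cycle6, unif, t)[0] == cr_labels(two_triangles, unif, t)[0]

# A degree-1 endpoint of a path is separated from a cycle vertex after
# one round: the neighbor multisets differ in size.
path3 = {0: [1], 1: [0, 2], 2: [1]}
unif3 = {v: 0 for v in range(3)}
assert cr_labels(path3, unif3, 1)[0] != cr_labels(cycle6, unif, 1)[0]
```

The 6-cycle/two-triangles pair also illustrates why 2-WL is strictly more separating than CR: 2-WL detects the triangles, while CR does not.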
+ +# 4.2 MAIN RESULTS + +We first provide a link between $k$ -WL and tensor language expressions using $k + 1$ index variables: + +Theorem 4.1. For each $k\geq 1$ and any collection $\Omega$ of functions, $\rho_{1}\big(\mathsf{vwl}_{k}^{(\infty)}\big) = \rho_{1}\big(\mathsf{TL}_{k + 1}(\Omega)\big)$ . + +This theorem gives us new insights: if we wish to understand how a new GNN architecture compares against the $k$ -WL algorithms, all we need to do is to show that such an architecture can be represented in $\mathsf{TL}_{k+1}(\Omega)$ , i.e., is a $k$ -MPNN, an arguably much easier endeavor. As an example of how to use this result, it is well known that triangles can be detected by 2-WL but not by 1-WL. Thus, in order to design GNNs that can detect triangles, layer definitions in $\mathsf{TL}_3$ rather than $\mathsf{TL}_2$ should be used. + +We can do much more, relating the rounds of $k$ -WL to the notion of summation depth of $\mathsf{TL}(\Omega)$ expressions. We also present similar results for functions computing graph embeddings. + +The summation depth $\operatorname{sd}(\varphi)$ of a $\mathsf{TL}(\Omega)$ expression $\varphi$ measures the nesting depth of the summations $\sum_{x}$ in the expression. It is defined inductively: $\operatorname{sd}(\mathbf{1}_{x\,\mathrm{op}\,y}) = \operatorname{sd}(E(x,y)) = \operatorname{sd}(P_s(x)) := 0$ , $\operatorname{sd}(\varphi_1\cdot \varphi_2) = \operatorname{sd}(\varphi_1 + \varphi_2) := \max \{\operatorname{sd}(\varphi_1),\operatorname{sd}(\varphi_2)\}$ , $\operatorname{sd}(a\cdot \varphi_1) := \operatorname{sd}(\varphi_1)$ , $\operatorname{sd}(f(\varphi_1,\dots ,\varphi_p)) := \max \{\operatorname{sd}(\varphi_i)\mid i\in [p]\}$ , and $\operatorname{sd}(\sum_x\varphi_1) := \operatorname{sd}(\varphi_1) + 1$ . For example, the expressions $\psi_{s}(x_{1})$ above have summation depth one.
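The inductive definition of $\operatorname{sd}(\varphi)$ is again a one-pass recursion over the expression tree. A minimal sketch (our own tuple encoding of TL expressions, not the paper's notation) follows; note that the inner $\sum_{t\in[\ell]}$ in $\psi_s$ ranges over feature positions, not over an index variable, so it contributes nothing to the summation depth:

```python
# Summation depth sd(phi) over a tuple-encoded TL expression tree.
# Encoding (illustrative): ("E", x, y), ("P", s, x), ("mul", a, b),
# ("add", a, b), ("f", a1, ..., ap), ("sum", x, body).

def sd(phi):
    tag = phi[0]
    if tag in ("E", "P", "one"):          # atomic predicates: depth 0
        return 0
    if tag == "sum":                      # sd(sum_x phi) = sd(phi) + 1
        return sd(phi[2]) + 1
    # mul/add/f: maximum depth over the argument expressions
    return max(sd(arg) for arg in phi[1:] if isinstance(arg, tuple))

# psi_s(x1) = sigma( sum_{x2} E(x1,x2) * (sum_t W_ts * P_t(x2)) ): only the
# summation over the index variable x2 counts, so sd = 1 as stated above.
psi = ("f", ("sum", "x2", ("mul", ("E", "x1", "x2"), ("P", 1, "x2"))))
assert sd(psi) == 1

# Nesting two index summations gives depth two, as for the path-counting
# expression theta(x1) = sum_{x2} sum_{x3} E(x1,x2) * E(x2,x3).
theta = ("sum", "x2", ("sum", "x3", ("mul", ("E", "x1", "x2"), ("E", "x2", "x3"))))
assert sd(theta) == 2
```

Under Theorem 4.2, such a syntactic count of the summation depth immediately yields the number of $k$-WL rounds bounding the expression's separation power.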
We write $\mathsf{TL}_{k + 1}^{(t)}(\Omega)$ for the class of expressions in $\mathsf{TL}_{k + 1}(\Omega)$ of summation depth at most $t$ , and use $k$ -MPNN$^{(t)}$ for the corresponding class of $k$ -MPNNs. We can now refine Theorem 4.1, taking into account the number of rounds used in $k$ -WL. + +Theorem 4.2. For all $t \geq 0$ , $k \geq 1$ and any collection $\Omega$ of functions, $\rho_1(\mathsf{vwl}_k^{(t)}) = \rho_1(\mathsf{TL}_{k + 1}^{(t)}(\Omega))$ . + +Guarded TL and color refinement. As noted by Barceló et al. (2020), the separation power of vertex embeddings of simple GNNs, which propagate information only through neighboring vertices, is usually weaker than that of 1-WL. For these types of architectures, Barceló et al. (2020) provide a relation with the weaker color refinement algorithm, but only in the special case of first-order classifiers. We can recover and extend this result in our general setting, with a guarded version of TL which, as we will show, has the same separation power as color refinement. + +The guarded fragment $\mathrm{GTL}(\Omega)$ of $\mathsf{TL}_2(\Omega)$ is inspired by the use of adjacency matrices in simple GNNs. In $\mathrm{GTL}(\Omega)$ only the equality predicates $\mathbf{1}_{x_i = x_i}$ (constant 1) and $\mathbf{1}_{x_i\neq x_i}$ (constant 0) are allowed, addition and multiplication require the component expressions to have the same (single) free index, and summation must occur in a guarded form $\sum_{x_j}\left(E(x_i,x_j)\cdot \varphi (x_j)\right)$ , for $i,j\in [2]$ . Guardedness means that summation only happens over neighbors. In $\mathrm{GTL}(\Omega)$ , all expressions have a single free variable and thus only functions from $\mathcal{G}_1$ can be represented. Our example expressions $\psi_s(x_1)$ are guarded. The fragment $\mathrm{GTL}^{(t)}(\Omega)$ consists of expressions in $\mathrm{GTL}(\Omega)$ of summation depth at most $t$ .
We denote by MPNNs and MPNNs$^{(t)}$ the corresponding "guarded" classes of 1-MPNNs. + +Theorem 4.3. For all $t\geq 0$ and any collection $\Omega$ of functions: $\rho_{1}\bigl (\mathsf{cr}^{(t)}\bigr) = \rho_{1}\bigl (\mathsf{GTL}^{(t)}(\Omega)\bigr).$ + +As an application of this theorem, to detect the existence of paths of length $t$ , the number of guarded layers in GNNs should account for a representation in $\mathrm{GTL}(\Omega)$ of summation depth of at least $t$ . We recall that $\rho_1(\mathsf{vwl}_1^{(t)}) \subset \rho_1(\mathsf{cr}^{(t)})$ which, combined with our previous results, implies that $\mathsf{TL}_2^{(t)}(\Omega)$ (resp., 1-MPNNs) is strictly more separating than $\mathsf{GTL}^{(t)}(\Omega)$ (resp., MPNNs). + +Graph embeddings. We next establish connections between the graph versions of $k$ -WL and CR, and TL expressions without free index variables. To this aim, we use $\rho_0(\mathcal{F})$ , for a set $\mathcal{F}$ of functions $f: \mathcal{G} \to \mathbb{R}^{\ell_f}$ , as the equivalence relation over $\mathcal{G}$ defined in analogy to $\rho_1$ : $(G, H) \in \rho_0(\mathcal{F}) \Longleftrightarrow \forall f \in \mathcal{F}, f(G) = f(H)$ . We thus consider separation power on the graph level. For example, we can consider $\rho_0(\mathsf{gcr}^{(t)})$ and $\rho_0(\mathsf{gwl}_k^{(t)})$ for any $t \geq 0$ and $k \geq 1$ . Also here, $\rho_0(\mathsf{gwl}_{k+1}^{(t)}) \subset \rho_0(\mathsf{gwl}_k^{(t)})$ but, in contrast to vertex embeddings, $\rho_0(\mathsf{gcr}^{(t)}) = \rho_0(\mathsf{gwl}_1^{(t)})$ (Grohe, 2021). We define $\rho_0(\mathcal{L})$ for a fragment $\mathcal{L}$ of $\mathrm{TL}(\Omega)$ by considering expressions without free index variables. + +The connection between the number of index variables in expressions and $k$ -WL continues to hold. Apart from $k = 1$ , however, no clean relationship exists between summation depth and rounds. + +Theorem 4.4.
For all $t \geq 0$ , $k \geq 1$ and any collection $\Omega$ of functions, we have that: + +$$ +(1) \quad \rho_ {0} \left(\mathbf {g c r} ^ {(t)}\right) = \rho_ {0} \left(\mathsf {T L} _ {2} ^ {(t + 1)} (\Omega)\right) = \rho_ {0} \left(\mathbf {g w l} _ {1} ^ {(t)}\right) \quad (2) \quad \rho_ {0} \left(\mathbf {g w l} _ {k} ^ {(\infty)}\right) = \rho_ {0} \left(\mathsf {T L} _ {k + 1} (\Omega)\right). +$$ + +Intuitively, in (1) the increase in summation depth by one is incurred by the additional aggregation needed to collect all vertex labels computed by $\mathsf{wl}_{1}^{(t)}$ . + +Optimality of number of indices. Our results so far tell us that graph functions represented in $\mathsf{TL}_{k + 1}(\Omega)$ are at most as separating as $k$ -WL. What is left unaddressed is whether all $k + 1$ index variables are needed for the graph functions under consideration. It may well be, for example, that there exists an equivalent expression using fewer index variables. This would imply a stronger upper bound on the separation power by $\ell$ -WL for $\ell < k$ . We next identify a large class of $\mathsf{TL}(\Omega)$ expressions, those of treewidth $k$ , for which the number of index variables can be reduced to $k + 1$ . + +Proposition 4.5. Expressions in $\mathsf{TL}(\Omega)$ of treewidth $k$ are equivalent to expressions in $\mathsf{TL}_{k + 1}(\Omega)$ . + +Treewidth is defined in the supplementary material (Section G) and a treewidth of $k$ implies that the computation of tensor language expressions can be decomposed, by reordering summations, such that each local computation requires at most $k + 1$ indices (see also Aji & McEliece (2000)). As a simple example, consider $\theta(x_1) = \sum_{x_2} \sum_{x_3} E(x_1, x_2) \cdot E(x_2, x_3)$ in $\mathsf{TL}_3^{(2)}$ such that $[\theta, v]_G$ counts the number of paths of length two starting from $v$ . This expression has a treewidth of one.
And indeed, it is equivalent to the expression $\tilde{\theta}(x_1) = \sum_{x_2} E(x_1, x_2) \cdot \left( \sum_{x_1} E(x_2, x_1) \right)$ in $\mathsf{TL}_2^{(2)}$ (and in fact in $\mathrm{GTL}^{(2)}$ ). As a consequence, no more vertices can be separated by $\theta(x_1)$ than by $\mathsf{cr}^{(2)}$ , rather than $\mathsf{vwl}_2^{(2)}$ as the original expression in $\mathsf{TL}_3^{(2)}$ suggests. + +On the impact of functions. All separation results for $\mathsf{TL}(\Omega)$ and fragments thereof hold regardless of the chosen functions in $\Omega$ , including when no functions are present at all. Function applications hence do not add separation power. While this may seem counter-intuitive, it is due to the presence of summation and multiplication in $\mathsf{TL}$ , which are enough to separate graphs or vertices. + +# 5 CONSEQUENCES FOR GNNS + +We next interpret the general results on the separation power from Section 4 in the context of GNNs. + +1. The separation power of any vertex embedding GNN architecture which is an $\mathsf{MPNN}^{(t)}$ is bounded by the power of $t$ rounds of color refinement. + +We consider the Graph Isomorphism Networks (GINs) (Xu et al., 2019) and show that these are MPNNs. To do so, we represent them in $\mathsf{GTL}(\Omega)$ . Let $\mathsf{gin}$ be such a network; it updates vertex embeddings as follows. Initially, $\mathsf{gin}^{(0)}: \mathcal{G}_1 \to \mathbb{R}^{\ell_0}: (G, v) \mapsto \mathbf{F}_{v:}^{(0)} := \mathsf{col}_G(v) \in \mathbb{R}^{\ell_0}$ . For layer $t > 0$ , $\mathsf{gin}^{(t)}: \mathcal{G}_1 \to \mathbb{R}^{\ell_t}$ is given by: $(G, v) \mapsto \mathbf{F}_{v:}^{(t)} := \mathsf{mlp}^{(t)}\big(\mathbf{F}_{v:}^{(t-1)}, \sum_{u \in N_G(v)} \mathbf{F}_{u:}^{(t-1)}\big)$ , with $\mathbf{F}^{(t)} \in \mathbb{R}^{n \times \ell_t}$ and $\mathsf{mlp}^{(t)} = (\mathsf{mlp}_1^{(t)}, \ldots, \mathsf{mlp}_{\ell_t}^{(t)}) : \mathbb{R}^{2\ell_{t-1}} \to \mathbb{R}^{\ell_t}$ an MLP.
We denote by $\mathsf{GIN}^{(t)}$ the class of GINs consisting of $t$ layers. Clearly, $\mathsf{gin}^{(0)}$ can be represented in $\mathsf{GTL}^{(0)}$ by considering the expressions $\varphi_i^{(0)}(x_1) := P_i(x_1)$ for each $i \in [\ell_0]$ . To represent $\mathsf{gin}^{(t)}$ , assume that we have $\ell_{t-1}$ expressions $\varphi_i^{(t-1)}(x_1)$ in $\mathsf{GTL}^{(t-1)}(\Omega)$ representing $\mathsf{gin}^{(t-1)}$ . That is, we have $[\varphi_i^{(t-1)}, v]_G = F_{vi}^{(t-1)}$ for each vertex $v$ and $i \in [\ell_{t-1}]$ . Then $\mathsf{gin}^{(t)}$ is represented by $\ell_t$ expressions $\varphi_i^{(t)}(x_1)$ defined as: + +$\mathsf{mlp}_i^{(t)}\Big(\varphi_1^{(t - 1)}(x_1),\ldots ,\varphi_{\ell_{t - 1}}^{(t - 1)}(x_1),\sum_{x_2}E(x_1,x_2)\cdot \varphi_1^{(t - 1)}(x_2),\ldots ,\sum_{x_2}E(x_1,x_2)\cdot \varphi_{\ell_{t - 1}}^{(t - 1)}(x_2)\Big),$ which are now expressions in $\mathsf{GTL}^{(t)}(\Omega)$ where $\Omega$ consists of MLPs. We have $\llbracket \varphi_i^{(t)},v\rrbracket_G = F_{vi}^{(t)}$ for each $v\in V_G$ and $i\in [\ell_t]$ , as desired. Hence, Theorem 4.3 tells us that $t$ -layered GINs cannot be more separating than $t$ rounds of color refinement, in accordance with known results (Xu et al., 2019; Morris et al., 2019). We thus simply cast GINs in $\mathsf{GTL}(\Omega)$ to obtain an upper bound on their separation power. In the supplementary material (Section D) we give similar analyses for GraphSage GNNs with various aggregation functions (Hamilton et al., 2017), GCNs (Kipf & Welling, 2017), simplified GCNs (SGCs) (Wu et al., 2019), Principled Neighborhood Aggregation (PNAs) (Corso et al., 2020), and revisit the analysis of ChebNet (Defferrard et al., 2016) given in Balcilar et al. (2021a). + +2. The separation power of any vertex embedding GNN architecture which is a $k$ -MPNN $^{(t)}$ is bounded by the power of $t$ rounds of $k$ -WL. + +For $k = 1$ , we consider extended Graph Isomorphism Networks (eGINs) (Barceló et al., 2020).
For an $\mathrm{egin} \in \mathrm{eGIN}$, $\mathrm{egin}^{(0)}: \mathcal{G}_1 \to \mathbb{R}^{\ell_0}$ is defined as for GINs, but for layer $t > 0$, $\mathrm{egin}^{(t)}: \mathcal{G}_1 \to \mathbb{R}^{\ell_t}$ is defined by $(G,v) \mapsto F_{v:}^{(t)} := \mathrm{mlp}^{(t)}\big(F_{v:}^{(t-1)}, \sum_{u \in N_G(v)} F_{u:}^{(t-1)}, \sum_{u \in V_G} F_{u:}^{(t-1)}\big)$, where $\mathrm{mlp}^{(t)}$ is now an MLP from $\mathbb{R}^{3\ell_{t-1}}$ to $\mathbb{R}^{\ell_t}$. The difference with GINs is the use of $\sum_{u \in V_G} F_{u:}^{(t-1)}$, which corresponds to the unguarded summation $\sum_{x_1} \varphi^{(t-1)}(x_1)$. This implies that TL rather than GTL needs to be used. In a similar way as for GINs, we can represent eGIN layers in $\mathsf{TL}_{2}^{(t)}(\Omega)$. That is, each $\mathrm{eGIN}^{(t)}$ is a 1-MPNN$^{(t)}$. Theorem 4.2 tells us that $t$ rounds of 1-WL bound the separation power of $t$-layered extended GINs, in accordance with Barceló et al. (2020). More generally, any GNN looking to go beyond CR must use non-guarded aggregations.

For $k \geq 2$, it is straightforward to show that $t$-layered "folklore" GNNs ($k$-FGNNs) (Maron et al., 2019b) are $k$-MPNNs$^{(t)}$ and thus, by Theorem 4.2, $t$ rounds of $k$-WL bound their separation power. One merely needs to cast the layer definitions in $\mathsf{TL}(\Omega)$ and observe that $k + 1$ indices and summation depth $t$ are needed. We thus refine and recover the $k$-WL bound for $k$-FGNNs by Azizian & Lelarge (2021). We also show that the separation power of $(k + 1)$-Invariant Graph Networks ($(k + 1)$-IGNs) (Maron et al., 2019b) is bounded by $k$-WL, albeit with an increase in the required number of rounds.

Theorem 5.1. For any $k \geq 1$, the separation power of a $t$-layered $(k + 1)$-IGN is bounded by the separation power of $tk$ rounds of $k$-WL.

We hereby answer open problem 1 in Maron et al. (2019a). The case $k = 1$ was solved in Chen et al. (2020) by analyzing properties of 1-WL.
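The only change from a GIN layer is the extra unguarded term. A sketch under the same illustrative conventions, with the identity standing in for the MLP so the three blocks stay visible:

```python
import numpy as np

def egin_layer(A, F, mlp):
    """eGIN update: mlp(F_v, sum_{u in N(v)} F_u, sum_{u in V} F_u).

    The third block is the unguarded summation sum_{x1} phi(x1):
    one graph-wide vector, broadcast to every vertex.
    """
    local = A @ F                            # guarded: over neighbours only
    glob = np.repeat(F.sum(axis=0, keepdims=True), A.shape[0], axis=0)
    return mlp(np.concatenate([F, local, glob], axis=1))

A = np.array([[0., 1.], [1., 0.]])           # a single edge
F = np.array([[1.], [2.]])
out = egin_layer(A, F, lambda H: H)          # identity stand-in for the MLP
print(out)  # [[1. 2. 3.], [2. 1. 3.]]
```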
By contrast, Theorem 4.2 shows that one can focus on expressing $(k + 1)$-IGNs in $\mathsf{TL}_{k + 1}(\Omega)$ and analyzing the summation depth of expressions. The proof of Theorem 5.1 requires non-trivial manipulations of tensor language expressions; it simplifies the proof given in Geerts (2020). The additional rounds ($tk$ instead of $t$) are needed because $(k + 1)$-IGNs aggregate information in one layer that only becomes accessible to $k$-WL after $k$ rounds. We defer the details to Section E in the supplementary material, where we also identify a simple class of $t$-layered $(k + 1)$-IGNs that are as powerful as $(k + 1)$-IGNs but whose separation power is bounded by $t$ rounds of $k$-WL.

We also consider "augmented" GNNs, which are combined with a preprocessing step in which higher-order graph information is computed. In the supplementary material (Section D.3) we show how TL encodes the preprocessing step, and how this leads to separation bounds in terms of $k$-WL, where $k$ depends on the treewidth of the graph information used. Finally, our approach can also be used to show that the spectral CayleyNets (Levie et al., 2019) are bounded in separation power by 2-WL. This result complements the spectral analysis of CayleyNets given in Balcilar et al. (2021b).

3. The separation power of any graph embedding GNN architecture that is a $k$-MPNN is bounded by the power of $k$-WL.

Graph embedding methods are commonly obtained from vertex (tuple) embedding methods by including a readout layer in which all vertex (tuple) embeddings are aggregated. For example, $\mathsf{mlp}\left(\sum_{v\in V}\mathsf{egin}^{(t)}(G,v)\right)$ is a typical readout layer for eGINs. Since $\mathsf{egin}^{(t)}$ can be represented in $\mathsf{TL}_{2}^{(t)}(\Omega)$, the readout layer can be represented in $\mathsf{TL}_{2}^{(t + 1)}(\Omega)$, using one extra summation. Hence eGINs with readout are 1-MPNNs, and their separation power is bounded by $\mathsf{gwl}_{1}^{(t)}$, in accordance with Theorem 4.4.
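A readout layer is just one more unguarded summation, now over all vertex embeddings; a minimal sketch with the identity as a stand-in for the MLP:

```python
import numpy as np

def readout(F, mlp):
    """Graph embedding from vertex embeddings: mlp(sum_v F_v).

    The sum over all of V is the extra summation that moves a
    t-layer representation from TL_2^(t) to TL_2^(t+1).
    """
    return mlp(F.sum(axis=0))

F = np.array([[1., 0.], [0., 1.], [1., 1.]])   # three vertex embeddings
g = readout(F, lambda z: z)                    # identity stand-in for the MLP
print(g)  # [2. 2.]
```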
This holds more generally. If vertex embedding methods are $k$-MPNNs, then so are their graph versions, which are then bounded by $\mathsf{gwl}_{k}^{(\infty)}$ by our Theorem 4.4.

4. To go beyond the separation power of $k$-WL, it is necessary to use GNNs whose layers are represented by expressions of treewidth $> k$.

Hence, to design expressive GNNs one needs to define the layers such that the treewidth of the resulting TL expressions is large enough. For example, to go beyond 1-WL, $\mathsf{TL}_3$-representable linear algebra operations should be used. Treewidth also sheds light on the open problem from Maron et al. (2019a), where it was asked whether polynomial layers (in $\pmb{A}$) increase the separation power. Indeed, consider a layer of the form $\sigma(A^3 \cdot F \cdot W)$, which raises the adjacency matrix $\pmb{A}$ to the power three. Translated into $\mathsf{TL}(\Omega)$, the layer expressions resemble $\sum_{x_2} \sum_{x_3} \sum_{x_4} E(x_1, x_2) \cdot E(x_2, x_3) \cdot E(x_3, x_4)$, of treewidth one. Proposition 4.5 tells us that the layer is bounded by $\mathsf{wl}_1^{(3)}$ (and in fact by $\mathsf{cr}^{(3)}$) in separation power. Suppose instead that the layer is of the form $\sigma(C \cdot F \cdot W)$, where $C_{ij}$ holds the number of 3-cliques containing the edge $ij$. Then, in $\mathsf{TL}(\Omega)$ we get expressions containing $\sum_{x_2} \sum_{x_3} E(x_1, x_2) \cdot E(x_1, x_3) \cdot E(x_2, x_3)$. The variables form a 3-clique, resulting in expressions of treewidth two. As a consequence, the separation power will be bounded by $\mathsf{wl}_2^{(2)}$. These examples show that it is not the number of multiplications (in both cases two) that gives power, but how the variables are connected to each other.

# 6 FUNCTION APPROXIMATION

We next provide characterizations of the functions that can be approximated by TL expressions, when these are interpreted as functions.
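The contrast between the two layer types can be checked numerically. A small NumPy sketch on a 4-cycle, reading "3-cliques containing the edge $ij$" as triangles through $ij$, so that $C = A \odot A^2$:

```python
import numpy as np

# 4-cycle on vertices 0..3.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Path-shaped expression sum_{x2,x3,x4} E(x1,x2)E(x2,x3)E(x3,x4):
# two multiplications, but the variables form a path -- treewidth 1.
walks3 = A @ A @ A

# Triangle-shaped expression sum_{x2,x3} E(x1,x2)E(x1,x3)E(x2,x3):
# also two multiplications, but the variables form a 3-clique -- treewidth 2.
C = A * (A @ A)   # C[i, j] = number of triangles through edge ij

print(walks3[0, 1], C.sum())  # 4.0 0.0 -- length-3 walks exist, triangles don't
```

The 4-cycle has plenty of length-3 walks but no triangles, so the two layers see genuinely different information despite using the same number of multiplications.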
We recover and extend results from Azizian & Lelarge (2021) by taking the number of layers of GNNs into account. We also provide new results related to color refinement.

# 6.1 GENERAL TL APPROXIMATION RESULTS

We assume that $\mathcal{G}_s$ is a compact space by requiring that vertex labels come from a compact set $K \subseteq \mathbb{R}^{\ell_0}$. Let $\mathcal{F}$ be a set of functions $f: \mathcal{G}_s \to \mathbb{R}^{\ell_f}$ and define its closure $\overline{\mathcal{F}}$ as the set of all functions $h$ from $\mathcal{G}_s$ for which there exists a sequence $f_{1}, f_{2}, \ldots \in \mathcal{F}$ such that $\lim_{i \to \infty} \sup_{G, \boldsymbol{v}} \| f_{i}(G, \boldsymbol{v}) - h(G, \boldsymbol{v}) \| = 0$ for some norm $\|\cdot\|$. We assume $\mathcal{F}$ to satisfy two properties. First, $\mathcal{F}$ is concatenation-closed: if $f_{1}: \mathcal{G}_{s} \to \mathbb{R}^{p}$ and $f_{2}: \mathcal{G}_{s} \to \mathbb{R}^{q}$ are in $\mathcal{F}$, then $g := (f_{1}, f_{2}): \mathcal{G}_{s} \to \mathbb{R}^{p+q}: (G, \boldsymbol{v}) \mapsto (f_{1}(G, \boldsymbol{v}), f_{2}(G, \boldsymbol{v}))$ is also in $\mathcal{F}$. Second, $\mathcal{F}$ is function-closed for a fixed $\ell \in \mathbb{N}$: for any $f \in \mathcal{F}$ with $f: \mathcal{G}_{s} \to \mathbb{R}^{p}$, also $g \circ f: \mathcal{G}_{s} \to \mathbb{R}^{\ell}$ is in $\mathcal{F}$ for any continuous function $g: \mathbb{R}^{p} \to \mathbb{R}^{\ell}$. For such $\mathcal{F}$, we let $\mathcal{F}_{\ell}$ be the subset of functions in $\mathcal{F}$ from $\mathcal{G}_{s}$ to $\mathbb{R}^{\ell}$. Our next result is based on a generalized Stone-Weierstrass Theorem (Timofte, 2005), also used in Azizian & Lelarge (2021).

Theorem 6.1. For any $\ell$ and any set $\mathcal{F}$ of functions that is concatenation- and function-closed for $\ell$, we have: $\overline{\mathcal{F}_{\ell}} = \{f:\mathcal{G}_s\to \mathbb{R}^{\ell}\mid \rho_s(\mathcal{F})\subseteq \rho_s(f)\}$.
This result gives us insight into which functions can be approximated by, for example, a set $\mathcal{F}$ of functions originating from a class of GNNs. In this case, $\overline{\mathcal{F}_{\ell}}$ represents all functions approximable by instances of this class, and Theorem 6.1 tells us that this set corresponds precisely to the set of all functions that are equally or less separating than the GNNs in the class. If, in addition, $\mathcal{F}_{\ell}$ is as separating as CR or $k$-WL, then we can say more. Let $\mathrm{alg} \in \{\mathrm{cr}^{(t)}, \mathrm{gcr}^{(t)}, \mathrm{wl}_k^{(t)}, \mathrm{gwl}_k^{(\infty)}\}$.

Corollary 6.2. Under the assumptions of Theorem 6.1 and if $\rho(\mathcal{F}_{\ell}) = \rho(\mathrm{alg})$, then $\overline{\mathcal{F}_{\ell}} = \{f : \mathcal{G}_s \to \mathbb{R}^{\ell} \mid \rho(\mathrm{alg}) \subseteq \rho(f)\}$.

The properties of being concatenation- and function-closed are satisfied for sets of functions representable in our tensor languages if $\Omega$ contains all continuous functions $g: \mathbb{R}^p \to \mathbb{R}^\ell$, for any $p$, or alternatively, all MLPs (by Lemma 32 in Azizian & Lelarge (2021)). Together with our results in Section 4, the corollary implies that $\mathrm{MPNNs}^{(t)}$, $1\text{-MPNNs}^{(t)}$, $k\text{-MPNNs}^{(t)}$ and $k\text{-MPNNs}$ can approximate all functions with equal or smaller separation power than $\operatorname{cr}^{(t)}$, $\operatorname{gcr}^{(t)}$, $\operatorname{wl}_k^{(t)}$ and $\operatorname{gwl}_k^{(\infty)}$, respectively.

Proposition 3.1 also tells us that the closure consists of invariant ($s = 0$) and equivariant ($s > 0$) functions.

# 6.2 CONSEQUENCES FOR GNNS

All our results combined provide a recipe for guaranteeing that a given function can be approximated by a GNN architecture. Indeed, suppose that your class of GNNs is an $\mathrm{MPNN}^{(t)}$ (respectively, a $1\text{-MPNN}^{(t)}$, $k\text{-MPNN}^{(t)}$ or $k\text{-MPNN}$, for some $k \geq 1$).
Then, since most classes of GNNs are concatenation-closed and allow the application of arbitrary MLPs, your GNNs can only approximate functions $f$ that are no more separating than $\mathrm{cr}^{(t)}$ (respectively, $\mathrm{gcr}^{(t)}$, $\mathrm{wl}_k^{(t)}$ or $\mathrm{gwl}_k^{(\infty)}$). To guarantee that these functions can indeed be approximated, one additionally has to show that the class of GNNs matches the corresponding labeling algorithm in separation power.

For example, GNNs in $\mathsf{GIN}_{\ell}^{(t)}$ are MPNNs$^{(t)}$, and thus $\overline{\mathsf{GIN}_{\ell}^{(t)}}$ contains any function $f: \mathcal{G}_1 \to \mathbb{R}^\ell$ satisfying $\rho_1(\mathsf{cr}^{(t)}) \subseteq \rho_1(f)$. Similarly, $\mathsf{eGIN}_{\ell}^{(t)}$s are 1-MPNNs$^{(t)}$, so $\overline{\mathsf{eGIN}_{\ell}^{(t)}}$ contains any function satisfying $\rho_1(\mathsf{wl}_1^{(t)}) \subseteq \rho_1(f)$; and when extended with a readout layer, their closures consist of the functions $f: \mathcal{G}_0 \to \mathbb{R}^\ell$ satisfying $\rho_0(\mathsf{gcr}^{(t)}) = \rho_0(\mathsf{wl}_1^{(t)}) \subseteq \rho_0(f)$. Finally, $k$-FGNN$^{(t)}$s are $k$-MPNNs$^{(t)}$, so $\overline{k\text{-FGNN}_{\ell}^{(t)}}$ consists of the functions $f$ such that $\rho_1(\mathsf{wl}_k^{(t)}) \subseteq \rho_1(f)$. We thus recover and extend results by Azizian & Lelarge (2021) by including layer information ($t$) and by treating color refinement separately from 1-WL for vertex embeddings. Furthermore, Theorem 5.1 implies that $\overline{(k + 1)\text{-IGN}_{\ell}}$ consists of the functions $f$ satisfying $\rho_1(\mathsf{wl}_k^{(\infty)}) \subseteq \rho_1(f)$ and $\rho_0(\mathsf{gwl}_k^{(\infty)}) \subseteq \rho_0(f)$, a case left open in Azizian & Lelarge (2021).
These results follow from Corollary 6.2, from the fact that the respective classes of GNNs can simulate CR or $k$-WL on graphs with either discrete (Xu et al., 2019; Barceló et al., 2020) or continuous labels (Maron et al., 2019b), and from the fact that they are $k$-MPNNs of the appropriate form.

# 7 CONCLUSION

Connecting GNNs and tensor languages allows us to use our analysis of tensor languages to understand the separation and approximation power of GNNs. The number of indices and the summation depth needed to represent the layers of a GNN determine its separation power in terms of color refinement and Weisfeiler-Leman tests. The framework of $k$-MPNNs provides a handy toolbox for understanding existing and new GNN architectures, and we demonstrate this by recovering several results about the power of GNNs presented recently in the literature, as well as by proving new results.

# 8 ACKNOWLEDGEMENTS & DISCLOSURE OF FUNDING

This work is partially funded by ANID-Millennium Science Initiative Program-Code ICN17_002, Chile.

# ETHICS STATEMENT

The results in this paper do not include misleading claims; their correctness is theoretically verified. Related work is accurately represented.

# REFERENCES

Mahmoud Abo Khamis, Hung Q. Ngo, and Atri Rudra. FAQ: Questions Asked Frequently. In Proceedings of the 35th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS, pp. 13-28. ACM, 2016. URL https://doi.org/10.1145/2902251.2902280.
Srinivas M. Aji and Robert J. McEliece. The generalized distributive law. IEEE Transactions on Information Theory, 46(2):325-343, 2000. URL https://doi.org/10.1109/18.825794.
Waiss Azizian and Marc Lelarge. Expressive power of invariant and equivariant graph neural networks. In Proceedings of the 9th International Conference on Learning Representations, ICLR, 2021. URL https://openreview.net/forum?id=lxHgXYN4bwl.
Muhammet Balcilar, Pierre Héroux, Benoit Gaüzère, Pascal Vasseur, Sébastien Adam, and Paul Honeine. Breaking the limits of message passing graph neural networks. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 599-608. PMLR, 2021a. URL http://proceedings.mlr.press/v139/balcilar21a.html.
Muhammet Balcilar, Guillaume Renton, Pierre Héroux, Benoit Gaüzère, Sébastien Adam, and Paul Honeine. Analyzing the expressive power of graph neural networks in a spectral perspective. In Proceedings of the 9th International Conference on Learning Representations, ICLR, 2021b. URL https://openreview.net/forum?id=-qh0M9XWxnv.
Pablo Barceló, Egor V. Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan Pablo Silva. The logical expressiveness of graph neural networks. In Proceedings of the 8th International Conference on Learning Representations, ICLR, 2020. URL https://openreview.net/forum?id=r1lZ7AEKvB.
Pablo Barceló, Floris Geerts, Juan L. Reutter, and Maksimilian Ryschkov. Graph neural networks with local graph parameters. In Advances in Neural Information Processing Systems, volume 34, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/d4d8d1ac7e00e9105775a6b660dd3cbb-Abstract.html.
Cristian Bodnar, Fabrizio Frasca, Yuguang Wang, Nina Otter, Guido F. Montúfar, Pietro Lió, and Michael M. Bronstein. Weisfeiler and Lehman go topological: Message passing simplicial networks. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 1026-1037. PMLR, 2021. URL http://proceedings.mlr.press/v139/bodnar21a.html.
Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M. Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting.
In Graph Representation Learning and Beyond (GRL+) Workshop at the 37th International Conference on Machine Learning, 2020. URL https://arxiv.org/abs/2006.09252.
Robert Brijder, Floris Geerts, Jan Van den Bussche, and Timmy Weerwag. On the expressive power of query languages for matrices. ACM Transactions on Database Systems, 44(4):15:1-15:31, 2019. URL https://doi.org/10.1145/3331445.
Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In Proceedings of the 2nd International Conference on Learning Representations, ICLR, 2014. URL https://openreview.net/forum?id=DQNsQf-UsoDBa.
Jin-yi Cai, Martin Fürer, and Neil Immerman. An optimal lower bound on the number of variables for graph identifications. Combinatorica, 12(4):389-410, 1992. URL https://doi.org/10.1007/BF01305232.
Zhengdao Chen, Soledad Villar, Lei Chen, and Joan Bruna. On the equivalence between graph isomorphism testing and function approximation with GNNs. In Advances in Neural Information Processing Systems, volume 32, 2019. URL https://proceedings.neurips.cc/paper/2019/file/71ee911dd06428a96c143a0b135041a4-Paper.pdf.
Zhengdao Chen, Lei Chen, Soledad Villar, and Joan Bruna. Can graph neural networks count substructures? In Advances in Neural Information Processing Systems, volume 33, 2020. URL https://proceedings.neurips.cc/paper/2020/file/75877cb75154206c4e65e76b88a12712-Paper.pdf.
Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Lio, and Petar Velickovic. Principal neighbourhood aggregation for graph nets. In Advances in Neural Information Processing Systems, volume 33, 2020. URL https://proceedings.neurips.cc/paper/2020/file/99cad265a1768cc2dd013f0e740300ae-Paper.pdf.
L. Csanky. Fast parallel matrix inversion algorithms. SIAM J. Comput., 5(4):618-623, 1976. URL https://doi.org/10.1137/0205040.
Radu Curticapean, Holger Dell, and Daniel Marx. Homomorphisms are a good basis for counting small subgraphs.
In Proceedings of the 49th Symposium on Theory of Computing, STOC, pp. 210-223, 2017. URL http://dx.doi.org/10.1145/3055399.3055502.
Clemens Damke, Vitalik Melnikov, and Eyke Hüllermeier. A novel higher-order Weisfeiler-Lehman graph convolution. In Proceedings of The 12th Asian Conference on Machine Learning, ACML, volume 129 of Proceedings of Machine Learning Research, pp. 49-64. PMLR, 2020. URL http://proceedings.mlr.press/v129/damke20a.html.
Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, volume 30, 2016. URL https://proceedings.neurips.cc/paper/2016/file/04df4d434d481c5bb723be1b6df1ee65-Paper.pdf.
Martin Fürer. Weisfeiler-Lehman refinement requires at least a linear number of iterations. In Proceedings of the 28th International Colloquium on Automata, Languages and Programming, ICALP, volume 2076 of Lecture Notes in Computer Science, pp. 322-333. Springer, 2001. URL https://doi.org/10.1007/3-540-48224-5_27.
Floris Geerts. The expressive power of kth-order invariant graph networks. CoRR, abs/2007.12035, 2020. URL https://arxiv.org/abs/2007.12035.
Floris Geerts. On the expressive power of linear algebra on graphs. Theory Comput. Syst., 65(1):179-239, 2021. URL https://doi.org/10.1007/s00224-020-09990-9.
Floris Geerts, Filip Mazowiecki, and Guillermo A. Pérez. Let's agree to degree: Comparing graph convolutional networks in the message-passing framework. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 3640-3649. PMLR, 2021a. URL http://proceedings.mlr.press/v139/geerts21a.html.
Floris Geerts, Thomas Muñoz, Cristian Riveros, and Domagoj Vrgoc. Expressive power of linear algebra query languages. In Proceedings of the 40th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS, pp. 342-354.
ACM, 2021b. URL https://doi.org/10.1145/3452021.3458314.
Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pp. 1263-1272, 2017. URL http://proceedings.mlr.press/v70/gilmer17a/gilmer17a.pdf.
Martin Grohe. The logic of graph neural networks. In Proceedings of the 36th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS, pp. 1-17. IEEE, 2021. URL https://doi.org/10.1109/LICS52264.2021.9470677.
William L. Hamilton. Graph representation learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 14(3):1-159, 2020. URL https://doi.org/10.2200/S01045ED1V01Y202009AIM046.
William L. Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, volume 30, 2017. URL https://proceedings.neurips.cc/paper/2017/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf.
David K. Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129-150, 2011. ISSN 1063-5203. doi: https://doi.org/10.1016/j.acha.2010.04.005. URL https://www.sciencedirect.com/science/article/pii/S1063520310000552.
Neil Immerman and Eric Lander. Describing graphs: A first-order approach to graph canonization. In Complexity Theory Retrospective: In Honor of Juris Hartmanis on the Occasion of His Sixtieth Birthday, pp. 59-81. Springer, 1990. URL https://doi.org/10.1007/978-1-4612-4478-3_5.
Nicolas Keriven and Gabriel Peyré. Universal invariant and equivariant graph neural networks. In Advances in Neural Information Processing Systems, volume 32, pp. 7092-7101, 2019. URL https://proceedings.neurips.cc/paper/2019/file/ea9268cb43f55d1d12380fb6ea5bf572-Paper.pdf.
Thomas N. Kipf and Max Welling.
Semi-supervised classification with graph convolutional networks. In Proceedings of the 5th International Conference on Learning Representations, ICLR, 2017. URL https://openreview.net/pdf?id=SJU4ayYgl.
Ron Levie, Federico Monti, Xavier Bresson, and Michael M. Bronstein. CayleyNets: Graph convolutional neural networks with complex rational spectral filters. IEEE Trans. Signal Process., 67(1):97-109, 2019. URL https://doi.org/10.1109/TSP.2018.2879624.
Haggai Maron, Heli Ben-Hamu, and Yaron Lipman. Open problems: Approximation power of invariant graph networks. In NeurIPS 2019 Graph Representation Learning Workshop, 2019a. URL https://grlearning.github.io/papers/31.pdf.
Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. In Advances in Neural Information Processing Systems, volume 32, 2019b. URL https://proceedings.neurips.cc/paper/2019/file/bb04af0f7ecaee4aaa62035497da1387-Paper.pdf.
Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. In Proceedings of the 7th International Conference on Learning Representations, ICLR, 2019c. URL https://openreview.net/forum?id=Syx72jc9tm.
Christian Merkwirth and Thomas Lengauer. Automatic generation of complementary descriptors with molecular graph networks. J. Chem. Inf. Model., 45(5):1159-1168, 2005. URL https://doi.org/10.1021/ci049613b.
H. L. Morgan. The generation of a unique machine description for chemical structures - a technique developed at Chemical Abstracts Service. Journal of Chemical Documentation, 5(2):107-113, 1965. URL https://doi.org/10.1021/c160017a018.
Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence, pp.
4602-4609, 2019. URL https://doi.org/10.1609/aaai.v33i01.33014602.
Christopher Morris, Gaurav Rattan, and Petra Mutzel. Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings. In Advances in Neural Information Processing Systems, volume 33, 2020. URL https://proceedings.neurips.cc/paper/2020/file/f81dee42585b3814de199b2e88757f5c-Paper.pdf.
Martin Otto. Bounded Variable Logics and Counting: A Study in Finite Models, volume 9 of Lecture Notes in Logic. Cambridge University Press, 2017. URL https://doi.org/10.1017/9781316716878.
Martin Otto. Graded modal logic and counting bisimulation. ArXiv, 2019. URL https://arxiv.org/abs/1910.00039.
Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Trans. Neural Networks, 20(1):61-80, 2009. URL https://doi.org/10.1109/TNN.2008.2005605.
Vlad Timofte. Stone-Weierstrass theorems revisited. Journal of Approximation Theory, 136(1):45-59, 2005. URL https://doi.org/10.1016/j.jat.2005.05.004.
Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. In Proceedings of the 6th International Conference on Learning Representations, ICLR, 2018. URL https://openreview.net/forum?id=rJXMpikCZ.
Felix Wu, Amauri H. Souza Jr., Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Q. Weinberger. Simplifying graph convolutional networks. In Proceedings of the 36th International Conference on Machine Learning, ICML, volume 97 of Proceedings of Machine Learning Research, pp. 6861-6871. PMLR, 2019. URL http://proceedings.mlr.press/v97/wu19e.html.
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In Proceedings of the 7th International Conference on Learning Representations, ICLR, 2019.
URL https://openreview.net/forum?id=ryGs6iA5Km.
Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R. Salakhutdinov, and Alexander J. Smola. Deep sets. In Advances in Neural Information Processing Systems, volume 30, 2017. URL https://proceedings.neurips.cc/paper/2017/file/f22e4747da1aa27e363d86d40ff442fe-Paper.pdf.

# SUPPLEMENTARY MATERIAL

# A RELATED WORK CONT'D

We provide additional details on how the tensor language $\mathsf{TL}(\Omega)$ considered in this paper relates to recent work on other matrix query languages. Closest to $\mathsf{TL}(\Omega)$ is the matrix query language sum-MATLANG (Geerts et al., 2021b), whose syntax is close to that of $\mathsf{TL}(\Omega)$. There are, however, key differences. First, although sum-MATLANG uses index variables (called vector variables), they all must occur under a summation. In other words, the concept of free index variables is missing, which implies that no general tensors can be represented. In $\mathsf{TL}(\Omega)$, we can represent arbitrary tensors, and the presence of free index variables is crucial to define vertex, or more generally, $k$-tuple embeddings in the context of GNNs. Furthermore, no notion of summation depth was introduced for sum-MATLANG. In $\mathsf{TL}(\Omega)$, the summation depth is crucial to assess the separation power in terms of the number of rounds of color refinement and $k$-WL. In fact, the separation power of sum-MATLANG had not been considered before, nor had finite-variable fragments of sum-MATLANG and their connections to color refinement and $k$-WL been studied. Finally, no other aggregation functions were considered for sum-MATLANG. We detail in Section C.5 that $\mathsf{TL}(\Omega)$ can be gracefully extended to $\mathsf{TL}(\Omega, \Theta)$ for an arbitrary set $\Theta$ of aggregation functions.
Connections to 1-WL and 2-WL, and the separation power of another matrix query language, MATLANG (Brijder et al., 2019), were established in Geerts (2021). Yet, the design of MATLANG is completely different in spirit from that of $\mathsf{TL}(\Omega)$. Indeed, MATLANG does not have index variables or explicit summation aggregation. Instead, it only supports matrix multiplication, matrix transposition, function applications, and turning a vector into a diagonal matrix. As such, MATLANG can be shown to be included in $\mathsf{TL}_3(\Omega)$. As with sum-MATLANG, MATLANG cannot represent general tensors, has no (free) index variables, and summation depth is not considered (in view of the absence of an explicit summation).

We also emphasize that neither for MATLANG nor for sum-MATLANG has a guarded fragment been considered. The guarded fragment is crucial for making connections to color refinement (Theorem 4.3). Furthermore, the analyses in terms of the number of index variables, summation depth and treewidth (Theorems 4.1 and 4.2 and Proposition 4.5) had not been considered before in the matrix query language literature. For none of these matrix query languages have approximation results been considered (Section 6.1).

Matrix query languages are used to assess the expressive power of linear algebra. Balcilar et al. (2021a) use MATLANG and the aforementioned connections to 1-WL and 2-WL to assess the separation power of GNNs. More specifically, similar to our work, they show that several GNN architectures can be represented in MATLANG, or fragments thereof. As a consequence, bounds on their separation power easily follow. Furthermore, Balcilar et al. (2021a) propose new architectures inspired by special operators in MATLANG. The use of $\mathsf{TL}(\Omega)$ can thus be seen as a continuation of their approach.
We note, however, that $\mathsf{TL}(\Omega)$ is more general than MATLANG (which is included in $\mathsf{TL}_3(\Omega)$), allows one to represent more complex linear algebra computations by means of summation (or other) aggregation, and, finally, provides insight into the number of iterations needed for color refinement and $k$-WL. The connection between the number of variables (or treewidth) and $k$-WL is not present in the work by Balcilar et al. (2021a), nor is the notion of a guarded fragment, which is needed to connect to color refinement. We believe that it is precisely these latter two insights that make the tensor language approach valuable for any GNN designer who wishes to upper bound their GNN architecture.

# B DETAILS OF SECTION 3

# B.1 PROOF OF PROPOSITION 3.1

Let $G = (V, E, \operatorname{col})$ be a graph and let $\sigma$ be a permutation of $V$. As usual, we define $\sigma \star G = (V^{\sigma}, E^{\sigma}, \operatorname{col}^{\sigma})$ as the graph with vertex set $V^{\sigma} := V$, edge set $E^{\sigma}$ such that $vw \in E^{\sigma}$ if and only if $\sigma^{-1}(v)\sigma^{-1}(w) \in E$, and $\operatorname{col}^{\sigma}(v) := \operatorname{col}(\sigma^{-1}(v))$. We need to show that for any expression $\varphi(\pmb{x})$ in $\mathsf{TL}(\Omega)$ either $\llbracket\varphi, \sigma \star \pmb{v}\rrbracket_{\sigma \star G} = \llbracket\varphi, \pmb{v}\rrbracket_G$, or, when $\varphi$ has no free index variables, $\llbracket\varphi\rrbracket_{\sigma \star G} = \llbracket\varphi\rrbracket_G$. We verify this by a simple induction on the structure of expressions in $\mathsf{TL}(\Omega)$.
- If $\varphi(x_i, x_j) = \mathbf{1}_{x_i\,\mathrm{op}\,x_j}$, then for a valuation $\nu$ mapping $x_i$ to $v_i$ and $x_j$ to $v_j$ in $V$:

$$
\llbracket \mathbf{1}_{x_i\,\mathrm{op}\,x_j}, \nu \rrbracket_G = \mathbf{1}_{v_i\,\mathrm{op}\,v_j} = \mathbf{1}_{\sigma(v_i)\,\mathrm{op}\,\sigma(v_j)} = \llbracket \mathbf{1}_{x_i\,\mathrm{op}\,x_j}, \sigma \star \nu \rrbracket_{\sigma \star G},
$$

where we used that $\sigma$ is a permutation.

- If $\varphi(x_i) = P_\ell(x_i)$, then for a valuation $\nu$ mapping $x_i$ to $v_i$ in $V$:

$$
\llbracket P_\ell(x_i), \nu \rrbracket_G = (\operatorname{col}(v_i))_\ell = (\operatorname{col}^{\sigma}(\sigma(v_i)))_\ell = \llbracket P_\ell(x_i), \sigma \star \nu \rrbracket_{\sigma \star G},
$$

where we used the definition of $\operatorname{col}^{\sigma}$.

- Similarly, if $\varphi(x_i, x_j) = E(x_i, x_j)$, then for a valuation $\nu$ assigning $x_i$ to $v_i$ and $x_j$ to $v_j$:

$$
\llbracket \varphi, \nu \rrbracket_G = \mathbf{1}_{v_i v_j \in E} = \mathbf{1}_{\sigma(v_i)\sigma(v_j) \in E^{\sigma}} = \llbracket \varphi, \sigma \star \nu \rrbracket_{\sigma \star G},
$$

where we used the definition of $E^{\sigma}$.

- If $\varphi(\pmb{x}) = \varphi_1(\pmb{x}_1) \cdot \varphi_2(\pmb{x}_2)$, then for a valuation $\nu$ from $\pmb{x}$ to $V$:

$$
\llbracket \varphi, \nu \rrbracket_G = \llbracket \varphi_1, \nu \rrbracket_G \cdot \llbracket \varphi_2, \nu \rrbracket_G = \llbracket \varphi_1, \sigma \star \nu \rrbracket_{\sigma \star G} \cdot \llbracket \varphi_2, \sigma \star \nu \rrbracket_{\sigma \star G} = \llbracket \varphi, \sigma \star \nu \rrbracket_{\sigma \star G},
$$

where we used the induction hypothesis for $\varphi_1$ and $\varphi_2$.
The cases $\varphi(\pmb{x}) = \varphi_1(\pmb{x}_1) + \varphi_2(\pmb{x}_2)$ and $\varphi(\pmb{x}) = a \cdot \varphi_{1}(\pmb{x})$ are dealt with in a similar way. + +- If $\varphi(\pmb{x}) = f(\varphi_1(\pmb{x}_1), \dots, \varphi_p(\pmb{x}_p))$ , then + +$$ +\begin{array}{l} \llbracket \varphi , \nu \rrbracket_{G} = f (\llbracket \varphi_{1}, \nu \rrbracket_{G}, \dots , \llbracket \varphi_{p}, \nu \rrbracket_{G}) \\ = f \left(\llbracket \varphi_{1}, \sigma \star \nu \rrbracket_{\sigma \star G}, \dots , \llbracket \varphi_{p}, \sigma \star \nu \rrbracket_{\sigma \star G}\right) \\ = \llbracket \varphi , \sigma \star \nu \rrbracket_{\sigma \star G}, \\ \end{array} +$$ + +where we again used the induction hypothesis, this time for $\varphi_1, \ldots, \varphi_p$ . + +- Finally, if $\varphi(\pmb{x}) = \sum_{y} \varphi_1(\pmb{x}, y)$ , then for a valuation $\nu$ of $\pmb{x}$ to $V$ : + +$$ +\begin{array}{l} \llbracket \varphi , \nu \rrbracket_{G} = \sum_{v \in V} \llbracket \varphi_{1}, \nu [ y \mapsto v ] \rrbracket_{G} = \sum_{v \in V} \llbracket \varphi_{1}, \sigma \star (\nu [ y \mapsto v ]) \rrbracket_{\sigma \star G} \\ = \sum_{v \in V^{\sigma}} \llbracket \varphi_{1}, \sigma \star (\nu [ y \mapsto v ]) \rrbracket_{\sigma \star G} = \llbracket \varphi , \sigma \star \nu \rrbracket_{\sigma \star G}, \\ \end{array} +$$ + +where we used the induction hypothesis for $\varphi_{1}$ and that $V^{\sigma} = V$ because $\sigma$ is a permutation. + +We remark that when $\varphi$ does not contain free index variables, then $\llbracket \varphi ,\nu \rrbracket_{G} = \llbracket \varphi \rrbracket_{G}$ for any valuation $\nu$ , so that invariance follows from the previous arguments. This concludes the proof of Proposition 3.1. + +# C DETAILS OF SECTION 4 + +In the following sections we prove Theorems 4.1, 4.2, 4.3 and 4.4.
More specifically, we start by showing these results in the setting that $\mathsf{TL}(\Omega)$ only supports summation aggregation $(\sum_{x}e)$ and in which the vertex-labellings in graphs take values in $\{0,1\}^{\ell}$ . In this context, we introduce classical logics in Section C.1 and recall and extend connections between the separation power of these logics and the separation power of color refinement and $k$ -WL in Section C.2. We connect $\mathsf{TL}(\Omega)$ and logics in Section C.3, to finally obtain the desired proofs in Section C.4. We then show how these results can be generalized in the presence of general aggregation operators in Section C.5, and to the setting where vertex-labellings take values in $\mathbb{R}^{\ell}$ in Section C.6. + +# C.1 CLASSICAL LOGICS + +In what follows, we consider graphs $G = (V_G, E_G, \mathsf{col}_G)$ with $\mathsf{col}_G : V_G \to \{0,1\}^\ell$ . We start by defining the $k$ -variable fragment $C^k$ of first-order logic with counting quantifiers, followed by the definition of the guarded fragment $\mathsf{GC}$ of $C^2$ . Formulae $\varphi$ in $C^k$ are defined over the set $\{x_1, \ldots, x_k\}$ of variables and are formed by the following grammar: + +$$ +\varphi := \left(x _ {i} = x _ {j}\right) \mid E \left(x _ {i}, x _ {j}\right) \mid P _ {s} \left(x _ {i}\right) \mid \neg \varphi \mid \varphi \wedge \varphi \mid \exists^ {\geq m} x _ {i} \varphi , +$$ + +where $i, j \in [k]$ , $E$ is a binary predicate, $P_s$ for $s \in [\ell]$ are unary predicates for some $\ell \in \mathbb{N}$ , and $m \in \mathbb{N}$ . The semantics of formulae in $C^k$ is defined in terms of interpretations relative to a given graph $G$ and a (partial) valuation $\mu: \{x_1, \ldots, x_k\} \to V_G$ . Such an interpretation maps formulae, graphs and valuations to Boolean values $\mathbb{B} := \{\bot, \top\}$ , in a similar way as we did for tensor language expressions. 
+ +More precisely, given a graph $G = (V_G, E_G, \mathrm{col}_G)$ and a partial valuation $\mu : \{x_1, \ldots, x_k\} \to V_G$ , we define $[[\varphi, \mu]]_G^{\mathbb{B}} \in \mathbb{B}$ for valuations defined on the free variables in $\varphi$ . That is, we define: + +$$ +\begin{array}{l} \llbracket x_i = x_j, \mu \rrbracket_{G}^{\mathbb{B}} := \text{if } \mu(x_i) = \mu(x_j) \text{ then } \top \text{ else } \bot ; \\ \llbracket E(x_i, x_j), \mu \rrbracket_{G}^{\mathbb{B}} := \text{if } \mu(x_i)\mu(x_j) \in E_G \text{ then } \top \text{ else } \bot ; \\ \llbracket P_s(x_i), \mu \rrbracket_{G}^{\mathbb{B}} := \text{if } (\operatorname{col}_G(\mu(x_i)))_s = 1 \text{ then } \top \text{ else } \bot ; \\ \llbracket \neg \varphi , \mu \rrbracket_{G}^{\mathbb{B}} := \neg \llbracket \varphi , \mu \rrbracket_{G}^{\mathbb{B}} ; \\ \llbracket \varphi_1 \wedge \varphi_2, \mu \rrbracket_{G}^{\mathbb{B}} := \llbracket \varphi_1, \mu \rrbracket_{G}^{\mathbb{B}} \wedge \llbracket \varphi_2, \mu \rrbracket_{G}^{\mathbb{B}} ; \\ \llbracket \exists^{\geq m} x_i\, \varphi_1, \mu \rrbracket_{G}^{\mathbb{B}} := \text{if } |\{ v \in V_G \mid \llbracket \varphi_1, \mu[x_i \mapsto v] \rrbracket_{G}^{\mathbb{B}} = \top \}| \geq m \text{ then } \top \text{ else } \bot . \\ \end{array} +$$ + +In the last expression, $\mu[x_i \mapsto v]$ denotes the valuation $\mu$ modified such that it maps $x_{i}$ to vertex $v$ . + +We will also need the guarded fragment GC of $C^2$ in which we only allow equality conditions of the form $x_{i} = x_{i}$ , component expressions of conjunction and disjunction should have the same single free variable, and counting quantifiers can only occur in guarded form: $\exists^{\geq m}x_{2}(E(x_{1},x_{2})\wedge \varphi (x_{2}))$ or $\exists^{\geq m}x_{1}(E(x_{2},x_{1})\wedge \varphi (x_{1}))$ .
The semantics of formulae in GC is inherited from formulae in $C^2$ . + +Finally, we will also consider $C_{\infty \omega}^{k}$ , that is, the logic $C^k$ extended with infinitary disjunctions and conjunctions. More precisely, we add to the grammar of formulae the following constructs: + +$$ +\bigvee_{\alpha \in A} \varphi_{\alpha} \quad \text{and} \quad \bigwedge_{\alpha \in A} \varphi_{\alpha}, +$$ + +where the index set $A$ can be arbitrary, even containing uncountably many indices. We define $\mathsf{GC}_{\infty \omega}$ in the same way, by extending GC with these infinitary disjunctions and conjunctions. The semantics is, as expected: $\llbracket \bigvee_{\alpha \in A}\varphi_{\alpha},\mu \rrbracket_{G}^{\mathbb{B}} = \top$ if for at least one $\alpha \in A$ , $\llbracket \varphi_{\alpha},\mu \rrbracket_{G}^{\mathbb{B}} = \top$ , and $\llbracket \bigwedge_{\alpha \in A}\varphi_{\alpha},\mu \rrbracket_{G}^{\mathbb{B}} = \top$ if for all $\alpha \in A$ , $\llbracket \varphi_{\alpha},\mu \rrbracket_{G}^{\mathbb{B}} = \top$ . + +We define the free variables of formulae just as for TL, and, similarly, quantifier rank is defined analogously to summation depth (only existential quantifications increase the quantifier rank). For any of the above logics $\mathcal{L}$ we define $\mathcal{L}^{(t)}$ as the set of formulae in $\mathcal{L}$ of quantifier rank at most $t$ . + +To capture the separation power of logics, we define $\rho_{1}\bigl (\mathcal{L}^{(t)}\bigr)$ as the equivalence relation on $\mathcal{G}_1$ defined by + +$$ +\bigl ((G, v), (H, w) \bigr) \in \rho_{1} \bigl (\mathcal{L}^{(t)} \bigr) \Longleftrightarrow \forall \varphi (x) \in \mathcal{L}^{(t)}: \llbracket \varphi , \mu_{v} \rrbracket_{G}^{\mathbb{B}} = \llbracket \varphi , \mu_{w} \rrbracket_{H}^{\mathbb{B}}, +$$ + +where $\mu_v$ is any valuation such that $\mu_v(x) = v$ , and likewise for $w$ .
The relation $\rho_0$ is defined in a similar way, except that now the relation is only over pairs of graphs, and the characterization is over all formulae with no free variables (also called sentences). Finally, we also use, and define, the relation $\rho_s$ , which relates pairs from $\mathcal{G}_s$ , each consisting of a graph and an $s$ -tuple of vertices. The relation is defined as + +$$ +\left(\left(G, \boldsymbol{v}\right), \left(H, \boldsymbol{w}\right)\right) \in \rho_{s} \left(\mathcal{L}^{(t)}\right) \Longleftrightarrow \forall \varphi (\boldsymbol{x}) \in \mathcal{L}^{(t)}: \llbracket \varphi , \mu_{\boldsymbol{v}} \rrbracket_{G}^{\mathbb{B}} = \llbracket \varphi , \mu_{\boldsymbol{w}} \rrbracket_{H}^{\mathbb{B}}, +$$ + +where $\pmb{x}$ consists of $s$ free variables and $\mu_{\pmb{v}}$ is a valuation assigning the $i$ -th variable of $\pmb{x}$ to the $i$ -th value of $\pmb{v}$ , for any $i \in [s]$ . + +# C.2 CHARACTERIZATION OF SEPARATION POWER OF LOGICS + +We first connect the separation power of the color refinement and $k$ -dimensional Weisfeiler-Leman algorithms to the separation power of the logics we just introduced. Although most of these connections are known, we present them in a more fine-grained way. That is, we connect the number of rounds used in the algorithms to the quantifier rank of formulae in the above logics. + +Proposition C.1. For any $t \geq 0$ , we have the following identities: + +(1) $\rho_{1}\big(\mathsf{cr}^{(t)}\big) = \rho_{1}\big(\mathsf{GC}^{(t)}\big)$ and $\rho_0\big(\mathsf{gcr}^{(t)}\big) = \rho_0\big(\mathsf{gwl}_1^{(t)}\big) = \rho_0\big(\mathsf{C}^{2,(t + 1)}\big);$ +(2) For $k \geq 1$ , $\rho_1\big(\mathsf{vwl}_k^{(t)}\big) = \rho_1\big(\mathsf{C}^{k + 1,(t)}\big)$ and + +$$ +\rho_{0} \left(\mathsf{C}^{k + 1, (t + k)}\right) \subseteq \rho_{0} \left(\mathsf{gwl}_{k}^{(t)}\right) \subseteq \rho_{0} \left(\mathsf{C}^{k + 1, (t + 1)}\right).
+$$ + +As a consequence, $\rho_0\big(\mathsf{gwl}_k^{(\infty)}\big) = \rho_0\big(\mathsf{C}^{k + 1}\big)$ . + +Proof. For (1), the identity $\rho_1(\mathsf{cr}^{(t)}) = \rho_1(\mathsf{GC}^{(t)})$ is known and can be found, for example, in Theorem V.10 in Grohe (2021). The identity $\rho_0(\mathsf{gcr}^{(t)}) = \rho_0(\mathsf{gwl}_1^{(t)})$ can be found in Proposition V.4 in Grohe (2021). The identity $\rho_0(\mathsf{gwl}_1^{(t)}) = \rho_0(\mathsf{C}^{2,(t + 1)})$ is a consequence of the inclusions shown in (2) for $k = 1$ . + +For (2), we use that $\rho_k(\mathsf{wl}_k^{(t)}) = \rho_k(\mathsf{C}^{k + 1,(t)})$ , see e.g., Theorem V.8 in Grohe (2021). We argue that this identity implies $\rho_1(\mathsf{vwl}_k^{(t)}) = \rho_1(\mathsf{C}^{k + 1,(t)})$ . Indeed, suppose that $(G,v)$ and $(H,w)$ are not in $\rho_1(\mathsf{C}^{k + 1,(t)})$ . Let $\varphi(x_1)$ be a formula in $\mathsf{C}^{k + 1,(t)}$ such that $\llbracket \varphi ,v\rrbracket_G^{\mathbb{B}}\neq \llbracket \varphi ,w\rrbracket_H^{\mathbb{B}}$ . Consider the formula $\varphi^{+}(x_{1},\ldots ,x_{k}) = \varphi (x_{1})\wedge \bigwedge_{i = 1}^{k}(x_{1} = x_{i})$ . Then, $\llbracket \varphi^{+},(v,\dots ,v)\rrbracket_G^{\mathbb{B}}\neq \llbracket \varphi^{+},(w,\dots ,w)\rrbracket_H^{\mathbb{B}}$ , and hence $(G,(v,\dots ,v))$ and $(H,(w,\dots ,w))$ are not in $\rho_k(\mathsf{C}^{k + 1,(t)})$ either. This implies that $\mathsf{wl}_{k}^{(t)}(G,(v,\dots ,v))\neq \mathsf{wl}_{k}^{(t)}(H,(w,\dots ,w))$ , and thus, by definition, $\mathsf{vwl}_{k}^{(t)}(G,v)\neq \mathsf{vwl}_{k}^{(t)}(H,w)$ . In other words, $(G,v)$ and $(H,w)$ are not in $\rho_1(\mathsf{vwl}_k^{(t)})$ , from which the inclusion $\rho_1(\mathsf{vwl}_k^{(t)})\subseteq \rho_1(\mathsf{C}^{k + 1,(t)})$ follows.
Conversely, if $(G,v)$ and $(H,w)$ are not in $\rho_1(\mathsf{vwl}_k^{(t)})$ , then $\mathsf{wl}_{k}^{(t)}(G,(v,\dots ,v))\neq \mathsf{wl}_{k}^{(t)}(H,(w,\dots ,w))$ . As a consequence, $(G,(v,\dots ,v))$ and $(H,(w,\dots ,w))$ are not in $\rho_k(\mathsf{C}^{k + 1,(t)})$ either. Let $\varphi (x_{1},\ldots ,x_{k})$ be a formula in $\mathsf{C}^{k + 1,(t)}$ such that $\llbracket \varphi ,(v,\dots ,v)\rrbracket_G^{\mathbb{B}}\neq \llbracket \varphi ,(w,\dots ,w)\rrbracket_H^{\mathbb{B}}$ . Then it is readily shown that we can convert $\varphi (x_{1},\ldots ,x_{k})$ into a formula $\varphi^{-}(x_{1})$ in $\mathsf{C}^{k + 1,(t)}$ such that $\llbracket \varphi^{-},v\rrbracket_G^{\mathbb{B}}\neq \llbracket \varphi^{-},w\rrbracket_H^{\mathbb{B}}$ , and thus $(G,v)$ and $(H,w)$ are not in $\rho_1(\mathsf{C}^{k + 1,(t)})$ . Hence, we also have the inclusion $\rho_1(\mathsf{vwl}_k^{(t)})\supseteq \rho_1(\mathsf{C}^{k + 1,(t)})$ , from which the first identity in (2) follows. + +It remains to show $\rho_0\big(\mathsf{C}^{k + 1,(t + k)}\big)\subseteq \rho_0\big(\mathsf{gwl}_k^{(t)}\big)\subseteq \rho_0\big(\mathsf{C}^{k + 1,(t + 1)}\big)$ . Clearly, if $(G,H)$ is not in $\rho_0\big(\mathsf{gwl}_k^{(t)}\big)$ then the multisets of labels $\{\{\mathsf{wl}_k^{(t)}(G,\pmb{v}) \mid \pmb{v} \in V_G^k\}\}$ and $\{\{\mathsf{wl}_k^{(t)}(H,\pmb{w}) \mid \pmb{w} \in V_H^k\}\}$ differ. It is known that with each label $c$ one can associate a formula $\varphi^c$ in $\mathsf{C}^{k + 1,(t)}$ such that $\llbracket \varphi^c,\pmb{v}\rrbracket_G^{\mathbb{B}} = \top$ if and only if $\mathsf{wl}_k^{(t)}(G,\pmb {v}) = c$ . So, if the multisets are different, there must be a $c$ that occurs more often in one multiset than in the other one. This can be detected by a formula of the form $\exists^{= m}(x_{1},\ldots ,x_{k})\,\varphi^{c}(x_{1},\ldots ,x_{k})$ , which is satisfied if there are exactly $m$ tuples $\pmb{v}$ with label $c$ .
It is now easily verified that the latter formula can be converted into a formula in $\mathsf{C}^{k + 1,(t + k)}$ . Hence, the inclusion $\rho_0\big(\mathsf{C}^{k + 1,(t + k)}\big)\subseteq \rho_0\big(\mathsf{gwl}_k^{(t)}\big)$ follows. + +For $\rho_0\big(\mathsf{gwl}_k^{(t)}\big)\subseteq \rho_0\big(\mathsf{C}^{k + 1,(t + 1)}\big)$ , we show that if $(G,H)$ is in $\rho_0\big(\mathsf{gwl}_k^{(t)}\big)$ , then $\llbracket \varphi ,\mu \rrbracket_{G}^{\mathbb{B}} = \llbracket \varphi ,\mu \rrbracket_{H}^{\mathbb{B}}$ for all formulae in $\mathsf{C}^{k + 1,(t + 1)}$ and any valuation $\mu$ (notice that $\mu$ is superfluous in this definition when formulae have no free variables). Assume that $(G,H)$ is in $\rho_0(\mathsf{gwl}_k^{(t)})$ . Since any formula of quantifier rank $t + 1$ is a Boolean combination of formulae of lower rank or a formula of the form $\varphi = \exists^{\geq m}x\,\psi$ where $\psi$ is of quantifier rank $t$ , we may without loss of generality consider a formula of the latter form, and assume for the sake of contradiction that $\llbracket \varphi ,\mu \rrbracket_{G}^{\mathbb{B}} = \top$ but $\llbracket \varphi ,\mu \rrbracket_{H}^{\mathbb{B}} = \bot$ . Since $\llbracket \varphi ,\mu \rrbracket_{G}^{\mathbb{B}} = \top$ , there must be at least $m$ elements satisfying $\psi$ . More precisely, let $v_{1},\ldots ,v_{p}$ be all vertices in $G$ such that $\llbracket \psi ,\mu [x\mapsto v_i]\rrbracket_{G}^{\mathbb{B}} = \top$ for each $i \in [p]$ . As mentioned, it must be that $p$ is at least $m$ . Using again the fact that $\rho_k\big(\mathsf{wl}_k^{(t)}\big) = \rho_k\big(\mathsf{C}^{k + 1,(t)}\big)$ , we infer that the color $\mathsf{wl}_k^{(t - 1)}(G,(v_i,\dots ,v_i))$ is the same for each such $v_{i}$ . + +Now since $\mathsf{gwl}_k^{(t - 1)}(G) = \mathsf{gwl}_k^{(t - 1)}(H)$ , it is not difficult to see that there must be exactly $p$ vertices $w_{1},\ldots ,w_{p}$ in $H$ such that $\mathsf{wl}_k^{(t - 1)}(G,(v_i,\dots ,v_i)) = \mathsf{wl}_k^{(t - 1)}(H,(w_i,\dots ,w_i))$ .
Otherwise, the aggregation of the colors assigned by $k$ -WL would simply not be the same in $G$ and $H$ . By the connection to logic, we again know that $\llbracket \psi ,\mu [x\mapsto w_i]\rrbracket_H^{\mathbb{B}} = \top$ for each $i \in [p]$ . Since $p \geq m$ , it then follows that $\llbracket \varphi ,\mu \rrbracket_H^{\mathbb{B}} = \top$ for any valuation $\mu$ , contradicting our assumption that $\llbracket \varphi ,\mu \rrbracket_H^{\mathbb{B}} = \bot$ . + +Finally, we remark that $\rho_0\big(\mathsf{gwl}_k^{(\infty)}\big) = \rho_0\big(\mathsf{C}^{k + 1}\big)$ follows from the preceding inclusions in (2). + +Before moving to tensor languages, where we will use infinitary logics to simulate expressions in $\mathsf{TL}_k(\Omega)$ and $\mathsf{GTL}(\Omega)$ , we recall that, when considering the separation power of logics, we can freely move between the logics and their infinitary counterparts: + +Theorem C.2. The following identities hold for any $t \geq 0$ , $k \geq 2$ and $s \geq 0$ : + +(1) $\rho_{1}\big(\mathsf{GC}_{\infty \omega}^{(t)}\big) = \rho_{1}\big(\mathsf{GC}^{(t)}\big)$ ; +(2) $\rho_s\big(\mathsf{C}_{\infty \omega}^{k,(t)}\big) = \rho_s\big(\mathsf{C}^{k,(t)}\big).$ + +Proof. For identity (1), notice that we only need to prove that $\rho_{1}\big(\mathsf{GC}^{(t)}\big)\subseteq \rho_{1}\big(\mathsf{GC}_{\infty \omega}^{(t)}\big)$ ; the other direction follows directly from the definition. We point out the well-known fact that two tuples $(G,v)$ and $(H,w)$ belong to $\rho_{1}\big(\mathsf{GC}^{(t)}\big)$ if and only if the unravelling of $G$ rooted at $v$ up to depth $t$ is isomorphic to the unravelling of $H$ rooted at $w$ up to depth $t$ . Here the unravelling of a graph rooted at a vertex is the tree whose root is that vertex and in which the children of a node are (copies of) its neighbors, continued recursively (see, e.g., Barceló et al. (2020); Otto (2019)). We now turn to the connection with infinitary logic.
Assume that the unravellings of $G$ rooted at $v$ and of $H$ rooted at $w$ up to level $t$ are isomorphic, but assume for the sake of contradiction that there is a formula $\varphi (x)$ in $\mathsf{GC}_{\infty \omega}^{(t)}$ such that $\llbracket \varphi ,\mu_v\rrbracket_G^{\mathbb{B}}\neq \llbracket \varphi ,\mu_w\rrbracket_H^{\mathbb{B}}$ , where $\mu_v$ and $\mu_w$ are any valuations mapping variable $x$ to $v$ and $w$ , respectively. Now since $G$ and $H$ are finite graphs, one can construct, from the formula $\varphi$ , a formula $\psi$ in $\mathsf{GC}^{(t)}$ such that $\llbracket \psi ,\mu_v\rrbracket_G^{\mathbb{B}}\neq \llbracket \psi ,\mu_w\rrbracket_H^{\mathbb{B}}$ . Notice that this is in contradiction with our assumption that the unravellings were isomorphic and therefore indistinguishable by formulae in $\mathsf{GC}^{(t)}$ . To construct $\psi$ , consider an infinitary disjunction $\bigvee_{a\in A}\alpha_{a}$ . Since $G$ and $H$ have a finite number of vertices, and the formulae have a finite number of variables, the number of different valuations from the variables to the vertices in $G$ or $H$ is also finite. Thus, whenever two disjuncts $\alpha_{a}$ and $\alpha_{a'}$ take the same value on $G$ and $H$ under every such valuation, one of the two can be dropped. The final result is a finite disjunction whose truth value over $G$ and $H$ coincides with that of the original infinitary disjunction. + +For identity (2) we refer to Corollary 2.4 in Otto (2017). + +# C.3 FROM $\mathsf{TL}(\Omega)$ TO $\mathsf{C}_{\infty \omega}^{k}$ AND $\mathsf{GC}_{\infty \omega}$ + +We are now finally ready to make the connection between expressions in $\mathsf{TL}(\Omega)$ and the infinitary logics introduced earlier. + +Proposition C.3.
For any expression $\varphi(\pmb{x})$ in $\mathsf{T}\mathsf{L}_k(\Omega)$ and $c \in \mathbb{R}$ , there exists a formula $\tilde{\varphi}^c(\pmb{x})$ in $\mathsf{C}_{\infty \omega}^k$ such that $\llbracket \varphi, \pmb{v} \rrbracket_G = c$ if and only if $\llbracket \tilde{\varphi}^c, \pmb{v} \rrbracket_G^{\mathbb{B}} = \top$ for any graph $G = (V_G, E_G, \mathrm{col}_G)$ in $\mathcal{G}$ and $\pmb{v} \in V_G^k$ . Furthermore, if $\varphi(x) \in \mathsf{GTL}(\Omega)$ then $\tilde{\varphi}^c \in \mathsf{GC}_{\infty \omega}$ . Finally, if $\varphi$ has summation depth $t$ then $\tilde{\varphi}^c$ has quantifier rank $t$ . + +Proof. We define $\tilde{\varphi}^c$ inductively on the structure of expressions in $\mathsf{TL}_k(\Omega)$ . + +- $\varphi(x_i, x_j) := \mathbf{1}_{x_i\,\mathrm{op}\,x_j}$ . Assume first that op is “ $=$ ”. We distinguish between (a) $i \neq j$ and (b) $i = j$ . For case (a), if $c = 1$ , then we define $\tilde{\varphi}^1(x_i, x_j) := (x_i = x_j)$ , if $c = 0$ , then we define $\tilde{\varphi}^0(x_i, x_j) := \neg (x_i = x_j)$ , and if $c \neq 0, 1$ , then we define $\tilde{\varphi}^c(x_i, x_j) := \neg (x_i = x_i)$ . For case (b), if $c = 1$ , then we define $\tilde{\varphi}^1(x_i, x_j) := (x_i = x_i)$ , and for any $c \neq 1$ , we define $\tilde{\varphi}^c(x_i, x_j) := \neg (x_i = x_i)$ . The case when op is “ $\neq$ ” is treated analogously. +- $\varphi(x_i) := P_\ell(x_i)$ . If $c = 1$ , then we define $\tilde{\varphi}^1(x_i) := P_\ell(x_i)$ , if $c = 0$ , then we define $\tilde{\varphi}^0(x_i) := \neg P_\ell(x_i)$ . For all other $c$ , we define $\tilde{\varphi}^c(x_i) := \neg (x_i = x_i)$ . +- $\varphi(x_i, x_j) := E(x_i, x_j)$ . If $c = 1$ , then we define $\tilde{\varphi}^1(x_i, x_j) := E(x_i, x_j)$ , if $c = 0$ , then we define $\tilde{\varphi}^0(x_i, x_j) := \neg E(x_i, x_j)$ . For all other $c$ , we define $\tilde{\varphi}^c(x_i, x_j) := \neg (x_i = x_i)$ . +- $\varphi \coloneqq \varphi_{1} + \varphi_{2}$ .
We observe that $\llbracket \varphi, \pmb{v} \rrbracket_{G} = c$ if and only if there are $c_{1}, c_{2} \in \mathbb{R}$ such that $\llbracket \varphi_{1}, \pmb{v} \rrbracket_{G} = c_{1}$ and $\llbracket \varphi_{2}, \pmb{v} \rrbracket_{G} = c_{2}$ and $c = c_{1} + c_{2}$ . Hence, it suffices to define + +$$ +\tilde{\varphi}^{c}:= \bigvee_{\substack{c_{1},c_{2}\in \mathbb{R}\\ c = c_{1} + c_{2}}}\tilde{\varphi}_{1}^{c_{1}}\wedge \tilde{\varphi}_{2}^{c_{2}}, +$$ + +where $\tilde{\varphi}_1^{c_1}$ and $\tilde{\varphi}_2^{c_2}$ are the formulae such that $\llbracket \varphi_1, \pmb{v} \rrbracket_G = c_1$ if and only if $\llbracket \tilde{\varphi}_1^{c_1}, \pmb{v} \rrbracket_G^{\mathbb{B}} = \top$ and $\llbracket \varphi_2, \pmb{v} \rrbracket_G = c_2$ if and only if $\llbracket \tilde{\varphi}_2^{c_2}, \pmb{v} \rrbracket_G^{\mathbb{B}} = \top$ , which exist by induction. + +- $\varphi \coloneqq \varphi_{1} \cdot \varphi_{2}$ . This case is analogous to the previous one. Indeed, $\llbracket \varphi, \pmb{v} \rrbracket_{G} = c$ if and only if there are $c_{1}, c_{2} \in \mathbb{R}$ such that $\llbracket \varphi_{1}, \pmb{v} \rrbracket_{G} = c_{1}$ and $\llbracket \varphi_{2}, \pmb{v} \rrbracket_{G} = c_{2}$ and $c = c_{1} \cdot c_{2}$ . Hence, it suffices to define + +$$ +\tilde{\varphi}^{c}:= \bigvee_{\substack{c_{1},c_{2}\in \mathbb{R}\\ c = c_{1}\cdot c_{2}}}\tilde{\varphi}_{1}^{c_{1}}\wedge \tilde{\varphi}_{2}^{c_{2}}. +$$ + +- $\varphi \coloneqq a \cdot \varphi_1$ . This case is again dealt with in a similar way. Indeed, $\llbracket \varphi, \pmb{v} \rrbracket_G = c$ if and only if there is a $c_1 \in \mathbb{R}$ such that $\llbracket \varphi_1, \pmb{v} \rrbracket_G = c_1$ and $c = a \cdot c_1$ . Hence, it suffices to define + +$$ +\tilde{\varphi}^{c}:= \bigvee_{\substack{c_{1}\in \mathbb{R}\\ c = a\cdot c_{1}}}\tilde{\varphi}_{1}^{c_{1}}. +$$ + +- $\varphi := f(\varphi_1, \ldots, \varphi_p)$ with $f: \mathbb{R}^p \to \mathbb{R}$ .
We observe that $\llbracket \varphi, \boldsymbol{v} \rrbracket_G = c$ if and only if there are $c_1, \ldots, c_p \in \mathbb{R}$ such that $c = f(c_1, \ldots, c_p)$ and $\llbracket \varphi_i, \boldsymbol{v} \rrbracket_G = c_i$ for $i \in [p]$ . Hence, it suffices to define + +$$ +\tilde{\varphi}^{c}:= \bigvee_{\substack{c_{1},\ldots ,c_{p}\in \mathbb{R}\\ c = f(c_{1},\ldots ,c_{p})}}\tilde{\varphi}_{1}^{c_{1}}\wedge \dots \wedge \tilde{\varphi}_{p}^{c_{p}}. +$$ + +- $\varphi \coloneqq \sum_{x_i} \varphi_1$ . We observe that $\llbracket \varphi, \mu \rrbracket_G = c$ implies that we can partition $V_G$ into $\ell$ parts $V_1, \ldots, V_\ell$ , of sizes $m_1, \ldots, m_\ell$ , respectively, such that $\llbracket \varphi_1, \mu[x_i \mapsto v] \rrbracket_G = c_i$ for each $v \in V_i$ , and such that all $c_i$ 's are pairwise distinct and $c = \sum_{i=1}^\ell c_i \cdot m_i$ . It now suffices to consider the following formula + +$$ +\tilde{\varphi}^{c}:= \bigvee_{\substack{\ell ,m_{1},\ldots ,m_{\ell}\in \mathbb{N}\\ c_{1},\ldots ,c_{\ell}\in \mathbb{R}\\ c = \sum_{i = 1}^{\ell}m_{i}c_{i}}}\bigwedge_{i = 1}^{\ell}\exists^{= m_{i}}x_{i}\,\tilde{\varphi}_{1}^{c_{i}}\wedge \forall x_{i}\bigvee_{i = 1}^{\ell}\tilde{\varphi}_{1}^{c_{i}}, +$$ + +where $\exists^{= m_{i}}x_{i}\,\psi$ is shorthand notation for $\exists^{\geq m_i}x_i\,\psi \land \neg \exists^{\geq m_i + 1}x_i\,\psi$ , and $\forall x_{i}\,\psi$ denotes $\neg \exists^{\geq 1}x_{i}\,\neg \psi$ . + +This concludes the construction of $\tilde{\varphi}^c$ . We observe that we only introduce quantifiers when $\varphi = \sum_{x_i}\varphi_1$ . Hence, assume by induction that summation depth and quantifier rank are in sync: if $\varphi_1$ has summation depth $t - 1$ , then $\tilde{\varphi}_1^c$ has quantifier rank $t - 1$ for any $c\in \mathbb{R}$ . Then $\varphi$ has summation depth $t$ and, as can be seen from the definition of $\tilde{\varphi}^c$ , this formula has quantifier rank $t$ , as desired.
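The identity $c = \sum_{i=1}^{\ell} m_i c_i$ driving the summation case is elementary but worth seeing concretely. The following minimal sketch (toy values of our own choosing, nothing paper-specific) recomputes a summation aggregate from the value partition that the counting quantifiers $\exists^{=m_i}$ certify:

```python
from collections import Counter

# Values [[phi_1, mu[x_i -> v]]]_G as v ranges over the vertices of a toy graph.
values = [2.0, 0.5, 2.0, 1.0, 0.5, 2.0]

# Direct semantics of the summation aggregate [[sum_{x_i} phi_1, mu]]_G.
direct = sum(values)

# Partition V_G by value: distinct values c_i with multiplicities m_i.
# In the formula, each pair (c_i, m_i) is certified by exists^{=m_i} x_i phi~^{c_i}.
partition = Counter(values)
via_partition = sum(c * m for c, m in partition.items())

assert direct == via_partition == 8.0   # c = sum_i m_i * c_i
```

The infinitary disjunction in the definition of $\tilde{\varphi}^c$ simply ranges over all such partitions that sum to the target value $c$.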
+ +It remains to verify the claim about guarded expressions. This is again verified by induction. The only case requiring some attention is $\varphi(x_1) \coloneqq \sum_{x_2} E(x_1, x_2) \cdot \varphi_1(x_2)$ , for which we can define + +$$ +\tilde{\varphi}^{c}:= \bigvee_{\substack{\ell ,m_{1},\ldots ,m_{\ell}\in \mathbb{N}\\ c_{1},\ldots ,c_{\ell}\in \mathbb{R}\\ c = \sum_{i = 1}^{\ell}m_{i}c_{i}\\ m = \sum_{i = 1}^{\ell}m_{i}}}\exists^{= m}x_{2}\,E(x_{1},x_{2})\wedge \bigwedge_{i = 1}^{\ell}\exists^{= m_{i}}x_{2}\,\big(E(x_{1},x_{2})\wedge \tilde{\varphi}_{1}^{c_{i}}(x_{2})\big), +$$ + +which is a formula in GC that again adds only one to the quantifier rank of the formulae $\tilde{\varphi}_1^{c_i}$ . So also here we have the one-to-one correspondence between summation depth and quantifier rank. + +# C.4 PROOF OF THEOREMS 4.1, 4.2, 4.3 AND 4.4 + +Proposition C.4. We have the following inclusions for any $t \geq 0$ and any collection $\Omega$ of functions: + +- $\rho_{1}\big(\mathsf{cr}^{(t)}\big)\subseteq \rho_{1}\big(\mathsf{GTL}^{(t)}(\Omega)\big);$ +- $\rho_{1}\big(\mathsf{vwl}_{k}^{(t)}\big)\subseteq \rho_{1}\big(\mathsf{TL}_{k + 1}^{(t)}(\Omega)\big);$ and +- $\rho_0\big(\mathsf{gwl}_{k}^{(t)}\big)\subseteq \rho_0\big(\mathsf{TL}_{k + 1}^{(t + 1)}(\Omega)\big).$ + +Proof. We first show the second bullet by contraposition. That is, we show that if $(G,v)$ and $(H,w)$ are not in $\rho_{1}\big(\mathsf{TL}_{k + 1}^{(t)}(\Omega)\big)$ , then neither are they in $\rho_{1}\big(\mathsf{vwl}_{k}^{(t)}\big)$ . Indeed, suppose that there exists an expression $\varphi (x_{1})$ in $\mathsf{TL}_{k + 1}^{(t)}(\Omega)$ such that $\llbracket \varphi ,v\rrbracket_{G} = c\neq c^{\prime} = \llbracket \varphi ,w\rrbracket_{H}$ .
From Proposition C.3 we know that there exists a formula $\tilde{\varphi}^{c}$ in $\mathsf{C}_{\infty \omega}^{k + 1,(t)}$ such that $\llbracket \tilde{\varphi}^{c},v\rrbracket_{G}^{\mathbb{B}} = \top$ and $\llbracket \tilde{\varphi}^{c},w\rrbracket_{H}^{\mathbb{B}} = \bot$ . Hence, $(G,v)$ and $(H,w)$ do not belong to $\rho_{1}\big(\mathsf{C}_{\infty \omega}^{k + 1,(t)}\big)$ . Theorem C.2 implies that $(G,v)$ and $(H,w)$ also do not belong to $\rho_{1}\big(\mathsf{C}^{k + 1,(t)}\big)$ . Finally, Proposition C.1 implies that $(G,v)$ and $(H,w)$ do not belong to $\rho_{1}\big(\mathsf{vwl}_{k}^{(t)}\big)$ , as desired. The third bullet is shown in precisely the same way, but using the identities for $\rho_0$ rather than $\rho_{1}$ , and $\mathsf{gwl}_{k}^{(t)}$ rather than $\mathsf{vwl}_{k}^{(t)}$ . + +The first bullet is also shown in the same way, using the connection between $\mathsf{GTL}^{(t)}(\Omega)$ , $\mathsf{GC}_{\infty \omega}^{(t)}$ , $\mathsf{GC}^{(t)}$ and $\mathsf{cr}^{(t)}$ , as given by Proposition C.1, Theorem C.2, and Proposition C.3. + +We next show that our tensor languages are at least as separating as the color refinement and $k$ -dimensional Weisfeiler-Leman algorithms. + +Proposition C.5. We have the following inclusions for any $t \geq 0$ and any collection $\Omega$ of functions: + +- $\rho_{1}\big(\mathsf{GTL}^{(t)}(\Omega)\big)\subseteq \rho_{1}\big(\mathsf{cr}^{(t)}\big);$ +- $\rho_{1}\big(\mathsf{TL}_{k + 1}^{(t)}(\Omega)\big)\subseteq \rho_{1}\big(\mathsf{vwl}_{k}^{(t)}\big);$ and +- $\rho_0\big(\mathsf{TL}_{k + 1}^{(t + k)}(\Omega)\big)\subseteq \rho_0\big(\mathsf{gwl}_k^{(t)}\big).$ + +Proof. For these inclusions to hold for any $\Omega$ , it suffices to show them without the use of any functions. We again use the connections between the color refinement and $k$ -dimensional Weisfeiler-Leman algorithms and finite variable logics as stated in Proposition C.1.
More precisely, we show that for any formula $\varphi(\boldsymbol{x}) \in \mathsf{C}^{k,(t)}$ there exists an expression $\hat{\varphi}(\boldsymbol{x}) \in \mathsf{T}\mathsf{L}_k^{(t)}$ such that for any graph $G$ in $\mathcal{G}$ , $\llbracket \varphi,\boldsymbol{v}\rrbracket_G^{\mathbb{B}} = \top$ implies $\llbracket \hat{\varphi},\boldsymbol{v}\rrbracket_G = 1$ and $\llbracket \varphi,\boldsymbol{v}\rrbracket_G^{\mathbb{B}} = \bot$ implies $\llbracket \hat{\varphi},\boldsymbol{v}\rrbracket_G = 0$ . By appropriately selecting $k$ and $t$ and by observing that when $\varphi(x) \in \mathsf{GC}$ then $\hat{\varphi}(x) \in \mathsf{GTL}$ , the inclusions follow. + +The construction of $\hat{\varphi}(\pmb{x})$ is by induction on the structure of formulae in $\mathsf{C}^k$ . + +- $\varphi \coloneqq (x_{i} = x_{j})$ . Then, we define $\hat{\varphi} \coloneqq \mathbf{1}_{x_i = x_j}$ . +- $\varphi \coloneqq P_{\ell}(x_i)$ . Then, we define $\hat{\varphi} \coloneqq P_{\ell}(x_i)$ . +- $\varphi \coloneqq E(x_{i},x_{j})$ . Then, we define $\hat{\varphi}\coloneqq E(x_i,x_j)$ . +- $\varphi \coloneqq \neg \varphi_{1}$ . Then, we define $\hat{\varphi} \coloneqq \mathbf{1}_{x_i = x_i} - \hat{\varphi}_1$ . +- $\varphi \coloneqq \varphi_{1} \wedge \varphi_{2}$ . Then, we define $\hat{\varphi} \coloneqq \hat{\varphi}_{1} \cdot \hat{\varphi}_{2}$ . +- $\varphi := \exists^{\geq m} x_i\, \varphi_1$ . Consider a polynomial $p(x) := \sum_j a_j x^j$ such that $p(x) = 0$ for $x \in \{0, 1, \ldots, m - 1\}$ and $p(x) = 1$ for $x \in \{m, m + 1, \ldots, n\}$ . Such a polynomial exists by interpolation. Then, we define $\hat{\varphi} := \sum_j a_j (\sum_{x_i} \hat{\varphi}_1)^j$ . + +We remark that we here crucially rely on the assumption that $\mathcal{G}$ contains graphs of fixed size $n$ and that $\mathrm{TL}_k$ is closed under linear combinations and products. Clearly, if $\varphi \in \mathrm{GC}$ , then the above translation results in an expression $\hat{\varphi} \in \mathrm{GTL}(\Omega)$ .
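The interpolation step in the $\exists^{\geq m}$ case can be made fully explicit. A small sketch over exact rationals (the function names are ours, not from the paper) constructs coefficients $a_j$ with $p(x) = 0$ on $\{0,\ldots,m-1\}$ and $p(x) = 1$ on $\{m,\ldots,n\}$ by Lagrange interpolation:

```python
from fractions import Fraction

def threshold_poly(m, n):
    """Coefficients a_0,...,a_n of the unique degree-<=n polynomial with
    p(x) = 0 for x in {0,...,m-1} and p(x) = 1 for x in {m,...,n},
    obtained by Lagrange interpolation over exact rationals."""
    xs = list(range(n + 1))
    ys = [0] * m + [1] * (n + 1 - m)
    coeffs = [Fraction(0)] * (n + 1)
    for k, yk in enumerate(ys):
        if yk == 0:
            continue
        basis = [Fraction(1)]          # running product prod_{j != k} (x - j)
        denom = Fraction(1)            # prod_{j != k} (k - j)
        for j in xs:
            if j == k:
                continue
            basis = [Fraction(0)] + basis        # multiply basis by x ...
            for i in range(len(basis) - 1):
                basis[i] -= j * basis[i + 1]     # ... and subtract j * (old basis)
            denom *= (k - j)
        for i, b in enumerate(basis):
            coeffs[i] += b / denom
    return coeffs

def evaluate(coeffs, x):
    return sum(a * x ** i for i, a in enumerate(coeffs))

# p vanishes below the threshold m = 2 and equals 1 from there up to n = 3.
coeffs = threshold_poly(2, 3)
assert [evaluate(coeffs, x) for x in range(4)] == [0, 0, 1, 1]
```

In the translation, $\hat{\varphi} := \sum_j a_j (\sum_{x_i} \hat{\varphi}_1)^j$ then uses exactly these coefficients; note that this only works because every graph in $\mathcal{G}$ has the same size $n$, so the inner sum never exceeds $n$.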
Furthermore, the quantifier rank of $\varphi$ is in one-to-one correspondence with the summation depth of $\hat{\varphi}$ . + +We can now apply Proposition C.1. That is, if $(G,v)$ and $(H,w)$ are not in $\rho_{1}\big(\mathsf{cr}^{(t)}\big)$ , then by Proposition C.1 there exists a formula $\varphi (x)$ in $\mathsf{GC}^{(t)}$ such that $\llbracket \varphi ,v\rrbracket_{G}^{\mathbb{B}} = \top$ and $\llbracket \varphi ,w\rrbracket_{H}^{\mathbb{B}} = \bot$ . We have just shown that for the corresponding expression $\hat{\varphi}$ in $\mathsf{GTL}^{(t)}$ , also $\llbracket \hat{\varphi},v\rrbracket_G\neq \llbracket \hat{\varphi},w\rrbracket_H$ holds. Hence, $(G,v)$ and $(H,w)$ are not in $\rho_1\big(\mathsf{GTL}^{(t)}(\Omega)\big)$ either, for any $\Omega$ . It follows that $\rho_1\big(\mathsf{GTL}^{(t)}(\Omega)\big) \subseteq \rho_1\big(\mathsf{cr}^{(t)}\big)$ holds. The other bullets are shown in the same way, again by relying on Proposition C.1 and using that we can move from $\mathsf{vwl}_{k}^{(t)}$ and $\mathsf{gwl}_{k}^{(t)}$ to logical formulae, and to expressions in $\mathsf{T}\mathsf{L}_{k+1}^{(t)}$ and $\mathsf{T}\mathsf{L}_{k+1}^{(t+k)}$ , respectively, to separate $(G,v)$ from $(H,w)$ or $G$ from $H$ , respectively. + +Theorems 4.1, 4.2, 4.3 and 4.4 now follow directly from Propositions C.4 and C.5. + +# C.5 OTHER AGGREGATION FUNCTIONS + +As mentioned in the main paper, our upper bound results on the separation power of tensor languages (and hence also of GNNs represented in those languages) generalize easily when aggregation functions other than summation are used in TL expressions. + +To clarify what we understand by an aggregation function, let us first recall the semantics of summation aggregation. Let $\varphi \coloneqq \sum_{x_i} \varphi_1$ , where $\sum_{x_i}$ represents summation aggregation, let $G = (V_G, E_G, \operatorname{col}_G)$ be a graph, and let $\nu$ be a valuation assigning index variables to vertices in $V_G$ .
The semantics is then given by:
+
+$$
+\llbracket \sum_{x_i} \varphi_1, \nu \rrbracket_{G} := \sum_{v \in V_G} \llbracket \varphi_1, \nu[x_i \mapsto v] \rrbracket_{G},
+$$
+
+as explained in Section 3. Semantically, we can alternatively view $\sum_{x_i}\varphi_1$ as a function which takes the sum of the elements in the following multiset of real values:
+
+$$
+\{\{\llbracket \varphi_1, \nu[x_i \mapsto v] \rrbracket_{G} \mid v \in V_G \}\}.
+$$
+
+One can now consider, more generally, an aggregation function $F$ as a function which assigns to any multiset of values in $\mathbb{R}$ a single real value. For example, $F$ could be max, min, mean, and so on. Let $\Theta$ be such a collection of aggregation functions. We next incorporate general aggregation functions in our tensor languages.
+
+First, we extend the syntax of expressions in $\mathsf{TL}(\Omega)$ by generalizing the construct $\sum_{x_i}\varphi$ in the grammar of $\mathsf{TL}(\Omega)$ expressions. More precisely, we define $\mathsf{TL}(\Omega ,\Theta)$ as the class of expressions, formed just like tensor language expressions, but in which two additional constructs, unconditional and conditional aggregation, are allowed. For an aggregation function $F$ we define:
+
+$$
+\mathsf{aggr}_{x_j}^{F}(\varphi) \quad \text{and} \quad \mathsf{aggr}_{x_j}^{F}\big(\varphi(x_j) \mid E(x_i, x_j)\big),
+$$
+
+where in the latter construct (conditional aggregation) the expression $\varphi(x_j)$ represents a $\mathrm{TL}(\Omega, \Theta)$ expression whose only free variable is $x_j$ . The intuition behind these constructs is that unconditional aggregation $\mathsf{aggr}_{x_j}^F(\varphi)$ allows for aggregating, using aggregate function $F$ , over the values of $\varphi$ where $x_j$ ranges unconditionally over all vertices in the graph.
In contrast, for conditional aggregation $\mathsf{aggr}_{x_j}^F\left(\varphi(x_j) \mid E(x_i, x_j)\right)$ , aggregation by $F$ of the values of $\varphi(x_j)$ is conditioned on the neighbors of the vertex assigned to $x_i$ . That is, the vertices for $x_j$ range only over the neighbors of the vertex assigned to $x_i$ .
+
+More specifically, the semantics of the aggregation constructs is defined as follows:
+
+$$
+\llbracket \mathsf{aggr}_{x_j}^{F}(\varphi), \nu \rrbracket_{G} := F\left(\{\{\llbracket \varphi, \nu[x_j \mapsto v] \rrbracket_{G} \mid v \in V_G \}\}\right) \in \mathbb{R}.
+$$
+
+$$
+\llbracket \mathsf{aggr}_{x_j}^{F}\big(\varphi(x_j) \mid E(x_i, x_j)\big), \nu \rrbracket_{G} := F\left(\{\{\llbracket \varphi, \nu[x_j \mapsto v] \rrbracket_{G} \mid v \in V_G, (\nu(x_i), v) \in E_G \}\}\right) \in \mathbb{R}.
+$$
+
+We remark that we can also consider aggregation functions $F$ over multisets of values in $\mathbb{R}^{\ell}$ for some $\ell \in \mathbb{N}$ . This requires extending the syntax with $\mathsf{aggr}_{x_j}^F (\varphi_1,\ldots ,\varphi_\ell)$ for unconditional aggregation and with $\mathsf{aggr}_{x_j}^F\big(\varphi_1(x_j),\dots ,\varphi_\ell (x_j)\mid E(x_i,x_j)\big)$ for conditional aggregation. The semantics is as expected: $F(\{\{(\llbracket \varphi_{1},\nu [x_{j}\mapsto v]\rrbracket_{G},\ldots ,\llbracket \varphi_{\ell},\nu [x_{j}\mapsto v]\rrbracket_{G})\mid v\in V_{G}\} \})\in \mathbb{R}$ and $F(\{\{(\llbracket \varphi_{1},\nu [x_{j}\mapsto v]\rrbracket_{G},\ldots ,\llbracket \varphi_{\ell},\nu [x_{j}\mapsto v]\rrbracket_{G})\mid v\in V_{G},(\nu (x_{i}),v)\in E_{G}\} \})\in \mathbb{R}$ .
+
+The need for considering conditional and unconditional aggregation separately is due to the use of arbitrary aggregation functions. Indeed, suppose that one uses an aggregation function $F$ for which $0 \in \mathbb{R}$ is a neutral value.
That is, for any multiset $X$ of real values, the equality $F(X) = F(X \oplus \{0\})$ holds. For example, the summation aggregation function satisfies this property. We then observe:
+
+$$
+\llbracket \mathsf{aggr}_{x_j}^{F}(\varphi(x_j) \mid E(x_i, x_j)), \nu \rrbracket_{G} = F(\{\{\llbracket \varphi, \nu[x_j \mapsto v] \rrbracket_{G} \mid v \in V_G, (\nu(x_i), v) \in E_G \}\})
+$$
+
+$$
+\begin{array}{l} = F\left(\{\{\llbracket \varphi \cdot E(x_i, x_j), \nu[x_j \mapsto v] \rrbracket_{G} \mid v \in V_G \}\}\right) \\ = \llbracket \mathsf{aggr}_{x_j}^{F}(\varphi(x_j) \cdot E(x_i, x_j)), \nu \rrbracket_{G}. \end{array}
+$$
+
+In other words, unconditional aggregation can simulate conditional aggregation. In contrast, when 0 is not a neutral value of the aggregation function $F$ , conditional and unconditional aggregation behave differently. Indeed, in such cases $\mathsf{aggr}_{x_j}^F (\varphi (x_j)\mid E(x_i,x_j))$ and $\mathsf{aggr}_{x_j}^F (\varphi (x_j)\cdot E(x_i,x_j))$ may evaluate to different values, as illustrated in the following example.
+
+As aggregation function $F$ we take the average $\mathrm{avg}(X) \coloneqq \frac{1}{|X|} \sum_{x \in X} x$ for multisets $X$ of real values. We remark that 0's in $X$ contribute to the size of $X$ and hence 0 is not a neutral element of avg. Now, let us consider the expressions
+
+$$
+\varphi_{1}(x_i) := \mathsf{aggr}_{x_j}^{\mathrm{avg}}(\mathbf{1}_{x_j = x_j} \cdot E(x_i, x_j)) \quad \text{and} \quad \varphi_{2}(x_i) := \mathsf{aggr}_{x_j}^{\mathrm{avg}}(\mathbf{1}_{x_j = x_j} \mid E(x_i, x_j)).
+$$
+
+Let $\nu$ be such that $\nu(x_{i}) = v$ .
Then, $\llbracket\varphi_{1}, \nu\rrbracket_{G}$ results in applying the average to the multiset $\{\{\mathbf{1}_{w = w} \cdot E(v, w) \mid w \in V_{G}\}\}$ , which includes the value 1 for every $w \in N_{G}(v)$ and a 0 for every non-neighbor $w \notin N_{G}(v)$ . In other words, $\llbracket\varphi_{1}, \nu\rrbracket_{G}$ results in $|N_{G}(v)| / |V_{G}|$ . In contrast, $\llbracket\varphi_{2}, \nu\rrbracket_{G}$ results in applying the average to the multiset $\{\{\mathbf{1}_{w = w} \mid w \in V_{G}, (v, w) \in E_{G}\}\}$ . This multiset only contains the value 1 for each $w \in N_{G}(v)$ , ignoring any information about the non-neighbors of $v$ , so $\llbracket\varphi_{2}, \nu\rrbracket_{G}$ results in $|N_{G}(v)| / |N_{G}(v)| = 1$ . Hence, conditional and unconditional aggregation behave differently for the average aggregation function.
+
+This said, one could alternatively use a more general variant of conditional aggregation of the form $\mathsf{aggr}_{x_j}^F (\varphi \mid \psi)$ with semantics $\llbracket\mathsf{aggr}_{x_j}^F (\varphi \mid \psi),\nu \rrbracket_G:= F\big(\{\{\llbracket\varphi ,\nu [x_j\mapsto v]\rrbracket_G\mid v\in V_G,\llbracket\psi ,\nu [x_j\mapsto v]\rrbracket_G\neq 0\}\}\big)$ , where one creates a multiset only for those valuations $\nu [x_j\mapsto v]$ for which the condition $\psi$ evaluates to a non-zero value. This general form of aggregation includes conditional aggregation, by replacing $\psi$ with $E(x_{i},x_{j})$ and restricting $\varphi$ , and unconditional aggregation, by replacing $\psi$ with the constant function 1, e.g., $\mathbf{1}_{x_j = x_j}$ . In order not to overload the syntax of TL expressions, we will not discuss this general form of aggregation further.
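The average example above can be replayed numerically. Below is a small sketch (our own illustration; the graph and evaluation vertex are arbitrary choices) computing $\llbracket\varphi_1,\nu\rrbracket_G$ and $\llbracket\varphi_2,\nu\rrbracket_G$ on a path graph:

```python
import numpy as np

# Path graph 0-1-2-3; evaluate at vertex v = 1, which has degree 2.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
v = 1

# phi_1: unconditional avg of 1_{x_j = x_j} * E(v, x_j); every vertex
# contributes, and non-neighbours contribute a 0.
phi1 = np.mean(A[v])                       # |N(v)| / |V| = 2/4

# phi_2: conditional avg over the neighbours of v only; every aggregated
# value is 1, so the result is always 1.
neighbours = np.flatnonzero(A[v])
phi2 = np.mean(np.ones(len(neighbours)))   # |N(v)| / |N(v)| = 1
```

As expected, the two aggregation modes disagree on any vertex with at least one non-neighbour.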
+
+The notion of free index variables for expressions in $\mathsf{T L}(\Omega ,\Theta)$ is defined as before, where now $\mathrm{free}(\mathsf{aggr}_{x_j}^F (\varphi))\coloneqq \mathrm{free}(\varphi)\setminus \{x_j\}$ , and where $\mathrm{free}(\mathsf{aggr}_{x_j}^F (\varphi (x_j)\mid E(x_i,x_j)))\coloneqq \{x_i\}$ (recall that $\mathrm{free}(\varphi (x_{j})) = \{x_{j}\}$ in conditional aggregation). Moreover, summation depth is replaced by the notion of aggregation depth, $\operatorname {agd}(\varphi)$ , defined in the same way as summation depth except that $\operatorname {agd}(\mathsf{aggr}_{x_j}^F (\varphi))\coloneqq \operatorname {agd}(\varphi) + 1$ and $\operatorname {agd}(\mathsf{aggr}_{x_j}^F (\varphi (x_j)\mid E(x_i,x_j)))\coloneqq \operatorname {agd}(\varphi) + 1$ . Similarly, the fragment $\mathsf{T L}_k(\Omega ,\Theta)$ and its aggregation-depth-restricted fragment $\mathsf{T L}_k^{(t)}(\Omega ,\Theta)$ are defined as before, using aggregation depth rather than summation depth.
+
+For the guarded fragment, $\mathsf{GTL}(\Omega ,\Theta)$ , expressions are now restricted such that aggregations must occur only in the form $\mathsf{aggr}_{x_j}^F (\varphi (x_j)\mid E(x_i,x_j))$ , for $i,j\in [2]$ . In other words, aggregation only happens on multisets of values obtained from neighboring vertices.
+
+We now argue that our upper bound results on the separation power remain valid for the extension $\mathsf{TL}(\Omega, \Theta)$ of $\mathsf{TL}(\Omega)$ with arbitrary aggregation functions $\Theta$ .
+
+Proposition C.6.
We have the following inclusions, for any $t \geq 0$ , any collection $\Omega$ of functions and any collection $\Theta$ of aggregation functions:
+
+- $\rho_{1}\big(\mathsf{cr}^{(t)}\big)\subseteq \rho_{1}\big(\mathsf{GTL}^{(t)}(\Omega ,\Theta)\big);$
+- $\rho_{1}\big(\mathsf{wl}_{k}^{(t)}\big)\subseteq \rho_{1}\big(\mathsf{T}\mathsf{L}_{k + 1}^{(t)}(\Omega ,\Theta)\big);$ and
+- $\rho_0\big(\mathsf{gwl}_k^{(t)}\big)\subseteq \rho_0\big(\mathsf{T}\mathsf{L}_{k + 1}^{(t + 1)}(\Omega ,\Theta)\big).$
+
+Proof. It suffices to show that Proposition C.3 also holds for expressions in the fragments of $\mathsf{TL}(\Omega, \Theta)$ considered. In particular, we only need to revise the case of summation aggregation (that is, $\varphi := \sum_{x_i} \varphi_1$ ) in the proof of Proposition C.3. Indeed, let us consider the more general case when one of the two aggregation constructs is used.
+
+$\varphi \coloneqq \mathsf{aggr}_{x_i}^F (\varphi_1)$ . We then define
+
+$$
+\tilde{\varphi}^{c} := \bigvee_{\ell \in \mathbb{N}} \bigvee_{(m_1, \ldots, m_\ell) \in \mathbb{N}^{\ell}} \bigvee_{(c, c_1, \ldots, c_\ell) \in \mathcal{C}(m_1, \ldots, m_\ell, F)} \bigwedge_{s = 1}^{\ell} \exists^{= m_s} x_i\, \tilde{\varphi}_1^{c_s} \wedge \forall x_i \bigvee_{s = 1}^{\ell} \tilde{\varphi}_1^{c_s},
+$$
+
+where $\mathcal{C}(m_1,\ldots ,m_\ell ,F)$ now consists of all $(c,c_{1},\dots ,c_{\ell})\in \mathbb{R}^{\ell +1}$ such that
+
+$$
+c = F\Big(\{\{\underbrace{c_{1},\ldots,c_{1}}_{m_{1}\ \text{times}},\ldots ,\underbrace{c_{\ell},\ldots,c_{\ell}}_{m_{\ell}\ \text{times}}\}\}\Big).
+$$
+
+$\varphi \coloneqq \mathsf{aggr}_{x_i}^F (\varphi_1(x_i)\mid E(x_j,x_i))$ .
We then define
+
+$$
+\tilde{\varphi}^{c} := \bigvee_{\ell \in \mathbb{N}}\bigvee_{(m_{1},\ldots ,m_{\ell})\in \mathbb{N}^{\ell}}\bigvee_{(c,c_{1},\ldots ,c_{\ell})\in \mathcal{C}(m_{1},\ldots ,m_{\ell},F)}\exists^{= m}x_{i}\, E(x_{j},x_{i})\wedge \bigwedge_{s = 1}^{\ell} \exists^{= m_s} x_i\, \big(E(x_j, x_i) \wedge \tilde{\varphi}_1^{c_s}(x_i)\big),
+$$
+
+where $\mathcal{C}(m_1,\ldots ,m_\ell ,F)$ again consists of all $(c,c_{1},\dots ,c_{\ell})\in \mathbb{R}^{\ell +1}$ such that
+
+$$
+c = F\Big(\{\{\underbrace{c_{1},\ldots,c_{1}}_{m_{1}\ \text{times}},\ldots ,\underbrace{c_{\ell},\ldots,c_{\ell}}_{m_{\ell}\ \text{times}}\}\}\Big) \quad \text{and} \quad m = \sum_{s = 1}^{\ell} m_{s}.
+$$
+
+It is readily verified that $\llbracket \mathsf{aggr}_{x_i}^F (\varphi_1),\pmb {v}\rrbracket_G = c$ iff $\llbracket \tilde{\varphi}^c,\pmb {v}\rrbracket_G^{\mathbb{B}} = \top$ , and $\llbracket \mathsf{aggr}_{x_i}^F (\varphi_1 (x_i)\mid E(x_j,x_i)),\pmb {v}\rrbracket_G = c$ iff $\llbracket \tilde{\varphi}^{c},\pmb {v}\rrbracket_{G}^{\mathbb{B}} = \top$ , as desired.
+
+For the guarded case, we note that the expression $\tilde{\varphi}^c$ above yields a guarded expression as long as conditional aggregation is used of the form $\mathsf{aggr}_{x_i}^F (\varphi (x_i)\mid E(x_j,x_i))$ with $i,j\in [2]$ , so we can reuse the argument in the proof of Proposition C.3 for the guarded case.
+
+We will illustrate later on (Section D) that this generalization allows for assessing the separation power of GNNs that use a variety of aggregation functions.
+
+The choice of supported aggregation functions has, of course, an impact on the ability of $\mathrm{TL}(\Omega ,\Theta)$ to match color refinement or the $k$ -WL procedures in separation power. The same holds for GNNs, as shown by Xu et al. (2019). And indeed, the proof of Proposition C.5 relies on the presence of summation aggregation.
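As a concrete illustration of why summation is special here (our own example, in the spirit of Xu et al. (2019)), summation distinguishes multisets of neighbour values that mean and max cannot:

```python
# Two multisets that mean and max cannot tell apart, but sum can:
# {{1, 2}} versus {{1, 1, 2, 2}}.
X = [1, 2]
Y = [1, 1, 2, 2]

sums = (sum(X), sum(Y))                     # different: (3, 6)
maxes = (max(X), max(Y))                    # identical
means = (sum(X) / len(X), sum(Y) / len(Y))  # identical
```

This is exactly the failure mode that prevents mean- or max-aggregating GNNs from simulating color refinement in general.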
We note that most lower bounds on the separation power of GNNs in terms of color refinement or the $k$ -WL procedures assume summation aggregation, since summation suffices to construct injective sum-decomposable functions on multisets (Xu et al., 2019; Zaheer et al., 2017), which are used to simulate color refinement and $k$ -WL. A more in-depth analysis of lower bounding GNNs with less expressive aggregation functions, possibly using weaker versions of color refinement and $k$ -WL, is left as future work.
+
+# C.6 GENERALIZATION TO GRAPHS WITH REAL-VALUED VERTEX LABELS
+
+We next consider the more general setting in which $\operatorname{col}_G: V_G \to \mathbb{R}^\ell$ for some $\ell \in \mathbb{N}$ . That is, vertices in a graph can carry real-valued vectors. We remark that no changes to either the syntax or the semantics of TL expressions are needed, yet note that $\llbracket P_s(x), \nu \rrbracket_G := \operatorname{col}_G(\nu(x))_s$ is now an element in $\mathbb{R}$ rather than 0 or 1, for each $s \in [\ell]$ .
+
+A first observation is that the color refinement and $k$ -WL procedures treat each real value as a separate label. That is, two values that differ only by an arbitrarily small $\epsilon > 0$ are considered different. The proofs of Theorems 4.1, 4.2, 4.3 and 4.4 rely on connections between color refinement and $k$ -WL and the finite variable logics GC and $C^{k+1}$ , respectively. In the discrete context, the unary predicates $P_s(x)$ used in the logical formulas indicate which label vertices have. That is, $[P_s, v]_G^{\mathbb{B}} = \top$ iff $\operatorname{col}_G(v)_s = 1$ . To accommodate real values in the context of separation power, these logics now need to be able to differentiate between different labels, that is, different real numbers. We therefore extend the unary predicates allowed in formulas. More precisely, for each dimension $s \in [\ell]$ , we now have uncountably many predicates of the form $P_{s,r}$ , one for each $r \in \mathbb{R}$ .
In any formula in GC or $C^{k+1}$ only a finite number of such predicates may occur. The Boolean semantics of these new predicates is as expected:
+
+$$
+\llbracket P_{s, r}(x), \nu \rrbracket_{G}^{\mathbb{B}} := \text{if } \operatorname{col}_{G}(\nu(x))_{s} = r \text{ then } \top \text{ else } \bot.
+$$
+
+In other words, in our logics, we can now detect which real-valued labels vertices have. Although, in general, the introduction of uncountably many predicates may cause problems, we here consider a specific setting in which the vertices in a graph have a unique label. This is commonly assumed in graph learning. Given this, it is easily verified that all results in Section C.2 carry over, where all logics involved now use the unary predicates $P_{s,r}$ with $s \in [\ell]$ and $r \in \mathbb{R}$ .
+
+The connection between TL and logics also carries over. First, for Proposition C.3 we now need to connect TL expressions, which use a finite number of predicates $P_{s}$ , for $s \in [\ell]$ , with the extended logics having uncountably many predicates $P_{s,r}$ , for $s \in [\ell]$ and $r \in \mathbb{R}$ , at their disposal. It suffices to reconsider the case $\varphi(x_{i}) = P_{s}(x_{i})$ in the proof of Proposition C.3. More precisely, $\llbracket P_{s}(x_{i}), \nu\rrbracket_{G}$ can now be an arbitrary value $c \in \mathbb{R}$ . We now simply define $\tilde{\varphi}^{c}(x_{i}) := P_{s,c}(x_{i})$ . By definition $\llbracket P_{s}(x_{i}), \nu\rrbracket_{G} = c$ if and only if $\llbracket P_{s,c}(x_{i}), \nu\rrbracket_{G}^{\mathbb{B}} = \top$ , as desired.
+
+The proof for the extended version of Proposition C.5 now needs a slightly different strategy, where we build the relevant TL formula after we take the contrapositive of the proposition. Let us first show how to construct a TL formula that is equivalent to a logical formula on any graph using only labels in a specific (finite) set $R$ of real numbers.
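The construction that follows hinges on a product that acts as an indicator for a single label value $r$ within a finite set $R$ : evaluate $\prod_{r'\in R, r'\neq r}(c - r')/(r - r')$ at the observed label $c$ . A quick numerical sanity check (our own sketch; the label set is hypothetical):

```python
def label_indicator(r, c, R):
    """Evaluate prod_{r' in R, r' != r} (c - r') / (r - r'): equals 1 when
    c == r, and 0 when c is any other value in R."""
    result = 1.0
    for rp in R:
        if rp != r:
            result *= (c - rp) / (r - rp)
    return result

# Hypothetical finite set of label values occurring in G and H.
R = [0.3, 1.7, 2.0]
```

For labels drawn from `R`, the indicator returns 1 exactly on the target value and 0 on the others.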
+
+In other words, given a set $R$ of real values, we show that for any formula $\varphi(\pmb{x}) \in C^{k,(t)}$ using unary predicates $P_{s,r}$ such that $r \in R$ , we can construct the desired $\hat{\varphi}$ . As mentioned, we only need to reconsider the case $\varphi(x_i) := P_{s,r}(x_i)$ . We define
+
+$$
+\hat{\varphi} := \frac{1}{\prod_{r' \in R, r \neq r'} (r - r')} \prod_{r' \in R, r \neq r'} \big(P_{s}(x_i) - r' \mathbf{1}_{x_i = x_i}\big).
+$$
+
+Then, writing $c := \operatorname{col}_G(\nu(x_i))_s$ for the label value at hand, $\llbracket \hat{\varphi},\nu \rrbracket_G$ evaluates to
+
+$$
+\frac{\prod_{r' \in R, r \neq r'} (c - r')}{\prod_{r' \in R, r \neq r'} (r - r')} = \begin{cases} 1 & \llbracket P_{s,r}, \nu \rrbracket^{\mathbb{B}} = \top \\ 0 & \llbracket P_{s,r}, \nu \rrbracket^{\mathbb{B}} = \bot \end{cases}.
+$$
+
+Indeed, if $\llbracket P_{s,r},\nu \rrbracket^{\mathbb{B}} = \top$ , then $c = r$ and hence $\llbracket P_s(x_i),\nu\rrbracket_G = r$ , resulting in the same numerator and denominator in the above fraction. If $\llbracket P_{s,r},\nu \rrbracket^{\mathbb{B}} = \bot$ , then $c = r'$ for some value $r' \in R$ with $r \neq r'$ . In this case, the numerator in the above fraction becomes zero. We remark that this revised construction still results in a guarded TL expression, when the input logical formula is guarded as well.
+
+Coming back to the proof of the extended version of Proposition C.5, let us show the proof for the fact that $\rho_{1}\big(\mathsf{GTL}^{(t)}(\Omega)\big) \subseteq \rho_{1}\big(\mathsf{cr}^{(t)}\big)$ , the other two items being analogous. Assume that there is a pair $(G,v)$ and $(H,w)$ which is not in $\rho_{1}\big(\mathsf{cr}^{(t)}\big)$ .
Then, by Proposition C.1, applied to graphs with real-valued labels, there exists a formula $\varphi(x)$ in $\mathsf{GC}^{(t)}$ such that $\llbracket\varphi,v\rrbracket_{G}^{\mathbb{B}} = \top \neq \llbracket\varphi,w\rrbracket_{H}^{\mathbb{B}} = \bot$ . We remark that $\varphi(x)$ uses finitely many $P_{s,r}$ predicates. Let $R$ be the set of real values used in both $G$ and $H$ (and $\varphi(x)$ ). We note that $R$ is finite. We invoke the construction sketched above, and obtain an expression $\hat{\varphi}$ in $\mathsf{GTL}^{(t)}$ such that $\llbracket\hat{\varphi},v\rrbracket_{G} \neq \llbracket\hat{\varphi},w\rrbracket_{H}$ . Hence, $(G,v)$ and $(H,w)$ are not in $\rho_{1}\big(\mathsf{GTL}^{(t)}(\Omega)\big)$ either, for any $\Omega$ , which was to be shown.
+
+# D DETAILS OF SECTION 5
+
+We here provide some additional details on the encoding of layers of GNNs in our tensor languages, and how, as a consequence of our results from Section 4, one obtains a bound on their separation power. This section showcases that it is relatively straightforward to represent GNNs in our tensor languages. Indeed, often, a direct translation of the layers, as defined in the literature, suffices.
+
+# D.1 COLOR REFINEMENT
+
+We start with GNN architectures related to color refinement, or in other words, architectures which can be represented in our guarded tensor language.
+
+GraphSage. We first consider a "basic" GNN, that is, an instance of GraphSage (Hamilton et al., 2017) in which sum aggregation is used. The initial features are given by $\pmb{F}^{(0)} = (\pmb{f}_1^{(0)},\dots,\pmb{f}_{d_0}^{(0)})$ where $\pmb{f}_i^{(0)} \in \mathbb{R}^{n\times 1}$ is a one-hot encoding of the $i$ th vertex label in $G$ . We can represent the initial embedding easily in $\mathrm{GTL}^{(0)}$ , without the use of any summation. Indeed, it suffices to define $\varphi_i^{(0)}(x_1) := P_i(x_1)$ for $i \in [d_0]$ .
We have $F_{vj}^{(0)} = \llbracket\varphi_j^{(0)},v\rrbracket_G$ for $j \in [d_0]$ , and thus the initial features can be represented by simple expressions in $\mathrm{GTL}^{(0)}$ .
+
+Assume now, by induction, that we can also represent the features computed by a basic GNN in layer $t - 1$ . That is, let $F^{(t - 1)}\in \mathbb{R}^{n\times d_{t - 1}}$ be those features and for each $i\in [d_{t - 1}]$ let $\varphi_i^{(t - 1)}(x_1)$ be expressions in $\mathrm{GTL}^{(t - 1)}(\sigma)$ representing them. We assume that, for each $i\in [d_{t - 1}]$ , $F_{vi}^{(t - 1)} = \llbracket\varphi_i^{(t - 1)},v\rrbracket_G$ . We remark that we assume that a summation depth of $t - 1$ suffices for layer $t - 1$ .
+
+Then, in layer $t$ , a basic GNN computes the next features as
+
+$$
+\boldsymbol{F}^{(t)} := \sigma \left(\boldsymbol{F}^{(t - 1)} \cdot \boldsymbol{V}^{(t)} + \boldsymbol{A} \cdot \boldsymbol{F}^{(t - 1)} \cdot \boldsymbol{W}^{(t)} + \boldsymbol{B}^{(t)}\right),
+$$
+
+where $\mathbf{A} \in \mathbb{R}^{n \times n}$ is the adjacency matrix of $G$ , $\mathbf{V}^{(t)}$ and $\mathbf{W}^{(t)}$ are weight matrices in $\mathbb{R}^{d_{t-1} \times d_t}$ , $\mathbf{B}^{(t)} \in \mathbb{R}^{n \times d_t}$ is a (constant) bias matrix consisting of $n$ copies of $\mathbf{b}^{(t)} \in \mathbb{R}^{d_t}$ , and $\sigma$ is some activation function. We can simply use the following expressions $\varphi_j^{(t)}(x_1)$ , for $j \in [d_t]$ :
+
+$$
+\sigma \left(\left(\sum_{i = 1}^{d_{t - 1}} V_{ij}^{(t)} \cdot \varphi_{i}^{(t - 1)}(x_1)\right) + \sum_{x_2} \left(E(x_1, x_2) \cdot \left(\sum_{i = 1}^{d_{t - 1}} W_{ij}^{(t)} \cdot \varphi_{i}^{(t - 1)}(x_2)\right)\right) + b_{j}^{(t)} \cdot \mathbf{1}_{x_1 = x_1}\right).
+$$
+
+Here, $W_{ij}^{(t)}$ , $V_{ij}^{(t)}$ and $b_{j}^{(t)}$ are real values corresponding to the weight matrices and bias vector in layer $t$ .
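The matrix form of the layer update above can be sketched in a few lines of NumPy (an illustration with random weights and ReLU as $\sigma$ ; all names are ours, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_prev, d_next = 5, 3, 4

# Random undirected graph and layer inputs.
A = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
A = A + A.T                                   # symmetric adjacency, no self-loops
F = rng.standard_normal((n, d_prev))          # features F^(t-1)
V = rng.standard_normal((d_prev, d_next))     # self weights V^(t)
W = rng.standard_normal((d_prev, d_next))     # neighbour weights W^(t)
b = rng.standard_normal(d_next)               # bias b^(t)

relu = lambda x: np.maximum(x, 0.0)

# F^(t) = sigma(F V + A F W + B): self term, summed neighbour term, bias.
F_next = relu(F @ V + A @ F @ W + b)

# Per-vertex view, mirroring the TL expression phi_j^(t)(x_1) for x_1 = v.
v = 0
row = relu(F[v] @ V + sum(A[v, w] * (F[w] @ W) for w in range(n)) + b)
```

The per-vertex loop computes exactly the guarded summation $\sum_{x_2} E(x_1,x_2)\cdot(\ldots)$ , so the two views agree entry-wise.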
These are expressions in $\mathrm{GTL}^{(t)}(\sigma)$ since the additional summation is guarded, and combined with the summation depth of $t - 1$ of $\varphi_{i}^{(t - 1)}$ , this results in a summation depth of $t$ for layer $t$ . Furthermore, $F_{vi}^{(t)} = \llbracket \varphi_{i}^{(t)}, v \rrbracket_{G}$ , as desired. If we denote by $\mathsf{bGNN}^{(t)}$ the class of $t$ -layered basic GNNs, then our results imply
+
+$$
+\rho_{1}\left(\mathsf{cr}^{(t)}\right) \subseteq \rho_{1}\left(\mathsf{GTL}^{(t)}(\Omega)\right) \subseteq \rho_{1}\left(\mathsf{bGNN}^{(t)}\right),
+$$
+
+and thus the separation power of basic GNNs is bounded by the separation power of color refinement. We thus recover known results by Xu et al. (2019) and Morris et al. (2019).
+
+Furthermore, if one uses a readout layer in basic GNNs to obtain a graph embedding, one typically applies a function $\mathsf{ro}:\mathbb{R}^{d_t}\to \mathbb{R}^{d_t}$ in the form of $\mathsf{ro}\bigl(\sum_{v\in V_G}\pmb {F}_v^{(t)}\bigr)$ , in which aggregation takes place over all vertices of the graph. This corresponds to the expressions in $\mathsf{T L}_{2}^{(t + 1)}(\sigma ,\mathsf{ro})$ given by $\varphi_j\coloneqq \mathsf{ro}_j\big(\sum_{x_1}\varphi_j^{(t)}(x_1)\big)$ , where $\mathsf{ro}_j$ is the projection of the readout function on the $j$ th coordinate. We note that this is indeed not a guarded expression anymore, and thus our results imply that
+
+$$
+\rho_{0}\left(\mathsf{gcr}^{(t)}\right) \subseteq \rho_{0}\left(\mathsf{TL}_{2}^{(t + 1)}(\Omega)\right) \subseteq \rho_{0}\left(\mathsf{bGNN}^{(t)} + \text{readout}\right).
+$$
+
+More generally, GraphSage allows for the use of general aggregation functions $F$ on the multiset of features of neighboring vertices.
To cast the corresponding layers in $\mathsf{TL}(\Omega)$ , we need to consider the extension $\mathsf{TL}(\Omega, \Theta)$ with an appropriate set $\Theta$ of aggregation functions, as described in Section C.5. In this way, we can represent layer $t$ by means of the following expressions $\varphi_j^{(t)}(x_1)$ , for $j \in [d_t]$ :
+
+$$
+\sigma \left(\left(\sum_{i = 1}^{d_{t - 1}} V_{ij}^{(t)} \cdot \varphi_{i}^{(t - 1)}(x_1)\right) + \sum_{i = 1}^{d_{t - 1}} W_{ij}^{(t)} \cdot \mathsf{aggr}_{x_2}^{F}\Big(\varphi_{i}^{(t - 1)}(x_2) \mid E(x_1, x_2)\Big) + b_{j}^{(t)} \cdot \mathbf{1}_{x_1 = x_1}\right),
+$$
+
+which is now an expression in $\mathrm{GTL}^{(t)}(\{\sigma\}, \Theta)$ and hence the bound in terms of $t$ iterations of color refinement carries over by Proposition C.6. Here, $\Theta$ simply consists of the aggregation functions used in the layers in GraphSage.
+
+GCNs. Graph Convolution Networks (GCNs) (Kipf & Welling, 2017) operate like basic GNNs except that a normalized Laplacian $D^{-1/2}(I + A)D^{-1/2}$ is used to aggregate features, instead of the adjacency matrix $A$ . Here, $D^{-1/2}$ is the diagonal matrix consisting of the reciprocals of the square roots of the vertex degrees in $G$ plus 1. The initial embedding $F^{(0)}$ is just as before. We again use $d_t$ to denote the number of features in layer $t$ . In layer $t > 0$ , a GCN computes $\pmb{F}^{(t)} \coloneqq \sigma(\pmb{D}^{-1/2}(\pmb{I} + \pmb{A})\pmb{D}^{-1/2} \cdot \pmb{F}^{(t-1)}\pmb{W}^{(t)} + \pmb{B}^{(t)})$ . If, in addition to the activation function $\sigma$ , we add the function $\frac{1}{\sqrt{x+1}}: \mathbb{R} \to \mathbb{R}: x \mapsto \frac{1}{\sqrt{x+1}}$ to $\Omega$ , we can represent the GCN layer as follows.
For $j \in [d_t]$ , we define the $\mathrm{GTL}^{(2t)}(\sigma, \frac{1}{\sqrt{x+1}})$ expressions
+
+$$
+\begin{array}{l} \varphi_{j}^{(t)}(x_1) := \sigma \left(f_{1/\sqrt{x + 1}}\left(\sum_{x_2} E(x_1, x_2)\right) \cdot \left(\sum_{i = 1}^{d_{t - 1}} W_{ij}^{(t)} \cdot \varphi_{i}^{(t - 1)}(x_1)\right) \cdot f_{1/\sqrt{x + 1}}\left(\sum_{x_2} E(x_1, x_2)\right) \right. \\ \quad \left. + f_{1/\sqrt{x + 1}}\left(\sum_{x_2} E(x_1, x_2)\right) \cdot \left(\sum_{x_2} E(x_1, x_2) \cdot f_{1/\sqrt{x + 1}}\left(\sum_{x_1} E(x_2, x_1)\right) \cdot \left(\sum_{i = 1}^{d_{t - 1}} W_{ij}^{(t)} \cdot \varphi_{i}^{(t - 1)}(x_2)\right)\right)\right), \end{array}
+$$
+
+where we omitted the bias vector for simplicity. We again observe that only guarded summations are needed. However, we remark that in every layer we now add two to the overall summation depth, since we need an extra summation to compute the degrees. In other words, a $t$ -layered GCN corresponds to expressions in $\mathrm{GTL}^{(2t)}(\sigma, \frac{1}{\sqrt{x + 1}})$ . If we denote by $\mathsf{GCN}^{(t)}$ the class of $t$ -layered GCNs, then our results imply
+
+$$
+\rho_{1}\left(\mathsf{cr}^{(2t)}\right) \subseteq \rho_{1}\left(\mathsf{GTL}^{(2t)}(\Omega)\right) \subseteq \rho_{1}\left(\mathsf{GCN}^{(t)}\right).
+$$
+
+We remark that another representation can be provided, in which the degree computation is factored out (Geerts et al., 2021a), resulting in a better upper bound $\rho_{1}\big(\mathsf{cr}^{(t + 1)}\big)\subseteq \rho_{1}\big(\mathsf{GCN}^{(t)}\big)$ . In a similar way as for basic GNNs, we also have $\rho_0\big(\mathsf{gcr}^{(t + 1)}\big)\subseteq \rho_0\big(\mathsf{GCN}^{(t)} + \mathsf{readout}\big)$ .
+
+SGCs.
As another example, we consider a variation of Simple Graph Convolutions (SGCs) (Wu et al., 2019), which use powers of the adjacency matrix and only apply a non-linear activation function at the end. That is, $\pmb{F} \coloneqq \sigma(\pmb{A}^p \cdot \pmb{F}^{(0)} \cdot \pmb{W})$ for some $p \in \mathbb{N}$ and $\pmb{W} \in \mathbb{R}^{d_0 \times d_1}$ . We remark that SGCs actually use powers of the normalized Laplacian, that is, $\pmb{F} \coloneqq \sigma\big((\pmb{D}^{-1/2}(\pmb{I} + \pmb{A}_G)\pmb{D}^{-1/2})^p \cdot \pmb{F}^{(0)} \cdot \pmb{W}\big)$ , but this only incurs an additional summation depth, as for GCNs. We focus here on our simpler version. It should be clear that we can represent the architecture in $\mathsf{T}\mathsf{L}_{p+1}^{(p)}(\Omega)$ by means of the expressions:
+
+$$
+\varphi_{j}(x_1) := \sigma \left(\sum_{x_2} \dots \sum_{x_{p + 1}} \prod_{k = 1}^{p} E(x_k, x_{k + 1}) \cdot \left(\sum_{i = 1}^{d_0} W_{ij} \cdot \varphi_{i}^{(0)}(x_{p + 1})\right)\right),
+$$
+
+for $j \in [d_1]$ . A naive application of our results would imply an upper bound on their separation power by $p$ -WL. We can, however, use Proposition 4.5. Indeed, it is readily verified that these expressions have a treewidth of one, because the variables form a path. And indeed, when, for example, $p = 3$ , we can equivalently write $\varphi_j(x_1)$ as
+
+$$
+\sigma \left(\sum_{x_2} E(x_1, x_2) \cdot \left(\sum_{x_1} E(x_2, x_1) \cdot \left(\sum_{x_2} E(x_1, x_2) \cdot \left(\sum_{i = 1}^{d_0} W_{ij} \cdot \varphi_{i}^{(0)}(x_2)\right)\right)\right)\right),
+$$
+
+by reordering the summations and reusing index variables. This holds for arbitrary $p$ . We thus obtain guarded expressions in $\mathrm{GTL}^{(p)}(\sigma)$ and our results imply that SGCs are bounded by $\mathrm{cr}^{(p)}$ for vertex embeddings, and by $\mathrm{gcr}^{(p)}$ for SGCs + readout.
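The reordering argument can be checked directly: evaluating $\sigma(\pmb{A}^p\pmb{F}^{(0)}\pmb{W})$ as one block of $p$ nested summations agrees with the guarded, two-variable evaluation that applies one neighbourhood summation at a time. A NumPy sketch (our own illustration with random data; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d0, d1, p = 6, 3, 2, 3

A = np.triu((rng.random((n, n)) < 0.5).astype(float), 1)
A = A + A.T                                   # symmetric adjacency
F0 = rng.standard_normal((n, d0))             # initial features F^(0)
W = rng.standard_normal((d0, d1))
relu = lambda x: np.maximum(x, 0.0)

# One big summation block: sum over x_2, ..., x_{p+1} of the product of
# edge indicators, i.e. sigma(A^p F^(0) W).
out_block = relu(np.linalg.matrix_power(A, p) @ F0 @ W)

# Reordered "guarded" evaluation: p nested neighbourhood summations,
# reusing only two index variables.
H = F0 @ W
for _ in range(p):
    H = A @ H                                 # aggregate over neighbours only
out_guarded = relu(H)
```

Both evaluations produce the same output, which is the computational content of the treewidth-one rewriting above.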
+
+Principal Neighbourhood Aggregation. Our next example is a GNN in which different aggregation functions are used: Principal Neighbourhood Aggregation (PNA) is an architecture proposed by Corso et al. (2020) in which aggregation over neighboring vertices is done by means of mean, stdv, max and min, in parallel. In addition, after aggregation, three different scalers are applied. Scalers are diagonal matrices whose diagonal entries are a function of the vertex degrees. Given the features for each vertex $v$ computed in layer $t - 1$ , that is, $F_{v:}^{(t - 1)} \in \mathbb{R}^{1 \times \ell}$ , a PNA computes $v$ 's new features in layer $t$ in the following way (see layer definition (8) in (Corso et al., 2020)). First, vectors $G_{v:}^{(t)} \in \mathbb{R}^{1 \times 4\ell}$ are computed such that
+
+$$
+G_{vj}^{(t)} = \left\{ \begin{array}{ll} \mathsf{mean}\left(\{\{\mathsf{mlp}_{j}(F_{w:}^{(t - 1)}) \mid w \in N_{G}(v)\}\}\right) & \text{for } 1 \leq j \leq \ell \\ \mathsf{stdv}\left(\{\{\mathsf{mlp}_{j - \ell}(F_{w:}^{(t - 1)}) \mid w \in N_{G}(v)\}\}\right) & \text{for } \ell + 1 \leq j \leq 2\ell \\ \mathsf{max}\left(\{\{\mathsf{mlp}_{j - 2\ell}(F_{w:}^{(t - 1)}) \mid w \in N_{G}(v)\}\}\right) & \text{for } 2\ell + 1 \leq j \leq 3\ell \\ \mathsf{min}\left(\{\{\mathsf{mlp}_{j - 3\ell}(F_{w:}^{(t - 1)}) \mid w \in N_{G}(v)\}\}\right) & \text{for } 3\ell + 1 \leq j \leq 4\ell, \end{array} \right.
+$$
+
+where $\mathsf{mlp}_j:\mathbb{R}^\ell \to \mathbb{R}$ is the projection of an MLP $\mathsf{mlp}:\mathbb{R}^{\ell}\to \mathbb{R}^{\ell}$ on the $j$ th coordinate. Then, three different scalers are applied. The first scaler is simply the identity; the other two scalers $s_1$ and $s_2$ depend on the vertex degrees.
As such, vectors $\pmb{H}_{v:}^{(t)}\in \mathbb{R}^{12\ell}$ are constructed as follows:
+
+$$
+H_{vj}^{(t)} = \left\{ \begin{array}{ll} G_{vj}^{(t)} & \text{for } 1 \leq j \leq 4\ell \\ s_1(\deg_{G}(v)) \cdot G_{v,j - 4\ell}^{(t)} & \text{for } 4\ell + 1 \leq j \leq 8\ell \\ s_2(\deg_{G}(v)) \cdot G_{v,j - 8\ell}^{(t)} & \text{for } 8\ell + 1 \leq j \leq 12\ell, \end{array} \right.
+$$
+
+where $s_1$ and $s_2$ are functions from $\mathbb{R}$ to $\mathbb{R}$ (see (Corso et al., 2020) for details). Finally, the new vertex embedding is obtained as
+
+$$
+\boldsymbol{F}_{v:}^{(t)} = \mathsf{mlp}'(\boldsymbol{H}_{v:}^{(t)})
+$$
+
+for some MLP $\mathsf{mlp}': \mathbb{R}^{12\ell} \to \mathbb{R}^{\ell}$ . The above layer definition translates naturally into expressions in $\mathsf{TL}(\Omega, \Theta)$ , the extension of $\mathsf{TL}(\Omega)$ with aggregate functions (Section C.5). Indeed, suppose that for each $j \in [\ell]$ we have $\mathsf{TL}(\Omega, \Theta)$ expressions $\varphi_j^{(t-1)}(x_1)$ such that $\llbracket \varphi_j^{(t-1)}, v \rrbracket_G = F_{vj}^{(t-1)}$ for any vertex $v$ . Then, $G_{vj}^{(t)}$ simply corresponds to the guarded expressions
+
+$$
+\psi_{j}^{(t)}(x_1) := \mathsf{aggr}_{x_2}^{\mathsf{mean}}\big(\mathsf{mlp}_j(\varphi_1^{(t - 1)}(x_2), \ldots, \varphi_{\ell}^{(t - 1)}(x_2)) \mid E(x_1, x_2)\big),
+$$
+
+for $1 \leq j \leq \ell$ , and similarly for the other components of $G_{v:}^{(t)}$ using the respective aggregation functions $\mathsf{stdv}$ , $\mathsf{max}$ and $\mathsf{min}$ .
Then, $H_{vj}^{(t)}$ corresponds to

$$
\xi_j^{(t)}(x_1) := \begin{cases} \psi_j^{(t)}(x_1) & \text{for } 1 \leq j \leq 4\ell \\ s_1\left(\mathsf{aggr}_{x_2}^{\mathsf{sum}}(\mathbf{1}_{x_2 = x_2} \mid E(x_1, x_2))\right) \cdot \psi_{j-4\ell}^{(t)}(x_1) & \text{for } 4\ell + 1 \leq j \leq 8\ell \\ s_2\left(\mathsf{aggr}_{x_2}^{\mathsf{sum}}(\mathbf{1}_{x_2 = x_2} \mid E(x_1, x_2))\right) \cdot \psi_{j-8\ell}^{(t)}(x_1) & \text{for } 8\ell + 1 \leq j \leq 12\ell, \end{cases}
$$

where we use summation aggregation to compute the degree information used in the scaler functions $s_1$ and $s_2$. And finally,

$$
\varphi_j^{(t)} := \mathsf{mlp}_j'\left(\xi_1^{(t)}(x_1), \dots, \xi_{12\ell}^{(t)}(x_1)\right)
$$

represents $F_{vj}^{(t)}$. We see that all expressions only use two index variables and aggregation is applied in a guarded way. Furthermore, in each layer, the aggregation depth increases by one. As such, a $t$-layered PNA can be represented in $\mathrm{GTL}^{(t)}(\Omega, \Theta)$, where $\Omega$ consists of the MLPs and the scaler functions, and $\Theta$ consists of sum (for computing vertex degrees), mean, stdv, max and min. Proposition C.6 then implies a bound on the separation power by $\mathrm{cr}^{(t)}$.

Other example. In the same way, one can also easily analyze GATs (Velickovic et al., 2018) and show that these can be represented in $\mathrm{GTL}(\Omega)$ as well, and thus bounds by color refinement can be obtained.

# D.2 $k$-DIMENSIONAL WEISFEILER-LEMAN TESTS

We next discuss architectures related to the $k$-dimensional Weisfeiler-Leman algorithms. For $k = 1$, we discussed the extended GINs in the main paper. We here focus on arbitrary $k \geq 2$.

Folklore GNNs.
We first consider the "Folklore" GNNs, or $k$-FGNNs for short (Maron et al., 2019b). For $k \geq 2$, $k$-FGNNs compute tensors. In particular, the initial tensor $\mathbf{F}^{(0)}$ encodes $\mathrm{atp}_k(G, \boldsymbol{v})$ for each $\boldsymbol{v} \in V_G^k$. We can represent this tensor by the following $k^2(\ell + 2)$ expressions in $\mathsf{TL}_k^{(0)}$:

$$
\varphi_{r,s,j}^{(0)}(x_1, \ldots, x_k) := \begin{cases} \mathbf{1}_{x_r = x_s} \cdot P_j(x_r) & \text{for } j \in [\ell] \\ E(x_r, x_s) & \text{for } j = \ell + 1 \\ \mathbf{1}_{x_r = x_s} & \text{for } j = \ell + 2 \end{cases}
$$

for $r,s\in [k]$ and $j\in [\ell +2]$. We note: $\llbracket \varphi_{r,s,j}^{(0)},(v_1,\ldots ,v_k)\rrbracket_G = F_{v_1,\ldots ,v_k,r,s,j}^{(0)}$ for all $(r,s,j)\in [k]^2\times [\ell +2]$, as desired. We let $\tau_0\coloneqq [k]^2\times [\ell +2]$ and set $d_0 := k^2(\ell +2)$.

Then, in layer $t$, a $k$-FGNN computes a tensor

$$
\mathbf{F}_{v_1, \dots, v_k, \bullet}^{(t)} := \mathsf{mlp}_0^{(t)}\big(\mathbf{F}_{v_1, \dots, v_k, \bullet}^{(t-1)}, \sum_{w \in V_G} \prod_{s=1}^{k} \mathsf{mlp}_s^{(t)}\big(\mathbf{F}_{v_1, \dots, v_{s-1}, w, v_{s+1}, \dots, v_k, \bullet}^{(t-1)}\big)\big),
$$

where $\mathsf{mlp}_s^{(t)}:\mathbb{R}^{d_{t-1}}\to \mathbb{R}^{d_t'}$, for $s\in [k]$, and $\mathsf{mlp}_0^{(t)}:\mathbb{R}^{d_{t-1}+d_t'}\to \mathbb{R}^{d_t}$ are MLPs. We here use $\bullet$ to denote combinations of indices in $\tau_{d_t}$ for $\mathbf{F}^{(t)}$ and in $\tau_{d_{t-1}}$ for $\mathbf{F}^{(t-1)}$.

Let $\mathbf{F}^{(t-1)}\in \mathbb{R}^{n^k\times d_{t-1}}$ be the tensor computed by a $k$-FGNN in layer $t - 1$.
Assume that for each tuple of indices $\boldsymbol{j}$ in $\tau_{d_{t-1}}$ we have an expression $\varphi_{\boldsymbol{j}}^{(t-1)}(x_1,\ldots ,x_k)$ satisfying $\llbracket \varphi_{\boldsymbol{j}}^{(t-1)},(v_1,\dots ,v_k)\rrbracket_G = F_{v_1,\dots ,v_k,\boldsymbol{j}}^{(t-1)}$ and such that it is an expression in $\mathsf{TL}_{k+1}^{(t-1)}(\Omega)$. That is, we need $k + 1$ index variables and a summation depth of $t - 1$ to represent layer $t - 1$.

Then, for layer $t$, for each $\boldsymbol{j} \in \tau_{d_t}$, it suffices to consider the expression

$$
\varphi_{\boldsymbol{j}}^{(t)}(x_1, \ldots, x_k) := \mathsf{mlp}_{0,\boldsymbol{j}}^{(t)}\Big(\big(\varphi_{\boldsymbol{i}}^{(t-1)}(x_1, \ldots, x_k)\big)_{\boldsymbol{i} \in \tau_{d_{t-1}}},\; \sum_{x_{k+1}} \prod_{s=1}^{k} \mathsf{mlp}_{s,\boldsymbol{j}}^{(t)}\big(\big(\varphi_{\boldsymbol{i}}^{(t-1)}(x_1, \dots, x_{s-1}, x_{k+1}, x_{s+1}, \dots, x_k)\big)_{\boldsymbol{i} \in \tau_{d_{t-1}}}\big)\Big),
$$

where $\mathsf{mlp}_{0,\boldsymbol{j}}^{(t)}$ and $\mathsf{mlp}_{s,\boldsymbol{j}}^{(t)}$ are the projections of the MLPs on the $\boldsymbol{j}$th coordinates. We remark that we need $k + 1$ index variables, and one extra summation is needed. We thus obtain expressions in $\mathsf{TL}_{k+1}^{(t)}(\Omega)$ for the $t$th layer, as desired. We remark that the expressions are simple translations of the defining layer definitions. Also, in this case, $\Omega$ consists of all MLPs. When a $k$-FGNN is used for vertex embeddings, we now simply add to each expression a factor $\prod_{s=1}^{k}\mathbf{1}_{x_1 = x_s}$.
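The sum-product update over the $k$ coordinate positions can be sketched directly in NumPy. This is an illustration under simplifying assumptions: the MLPs are stand-in `tanh` maps and the tensor is stored as a dictionary over $k$-tuples, which is not how an efficient implementation would proceed.

```python
import numpy as np
from itertools import product

def fgnn_layer(F, mlps, mlp0, k, n):
    """One k-FGNN layer (sketch). F is an array of shape (n,)*k + (d,);
    for each k-tuple v we sum over w the elementwise product of the
    transformed features of v with its s-th coordinate replaced by w."""
    d_inner = mlps[0](F[(0,) * k]).shape[0]
    out = {}
    for v in product(range(n), repeat=k):
        acc = np.zeros(d_inner)
        for w in range(n):
            prod = np.ones(d_inner)
            for s in range(k):
                u = v[:s] + (w,) + v[s + 1:]   # replace s-th coordinate by w
                prod *= mlps[s](F[u])
            acc += prod
        out[v] = mlp0(np.concatenate([F[v], acc]))
    return out

n, k, d = 3, 2, 2
rng = np.random.default_rng(0)
F = rng.standard_normal((n,) * k + (d,))
W = rng.standard_normal((d, d))
mlps = [lambda x, W=W: np.tanh(x @ W) for _ in range(k)]   # stand-in MLPs
mlp0 = lambda x: np.tanh(x)                                # stand-in MLP
out = fgnn_layer(F, mlps, mlp0, k, n)
print(out[(0, 1)].shape)   # (4,): concatenation of F[v] (d) and the sum term (d)
```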
As an immediate consequence of our results, if we denote by $k\text{-FGNN}^{(t)}$ the class of $t$-layered $k$-FGNNs, then for vertex embeddings:

$$
\rho_1\left(\mathsf{vwl}_k^{(t)}\right) \subseteq \rho_1\left(\mathsf{TL}_{k+1}^{(t)}(\Omega)\right) \subseteq \rho_1\left(k\text{-FGNN}^{(t)}\right)
$$

in accordance with the known results from Azizian & Lelarge (2021). When used for graph embeddings, an aggregation layer over all $k$-tuples of vertices is added, followed by the application of an MLP. This results in expressions with no free index variables and of summation depth $t + k$, where the increase with $k$ stems from the aggregation process over all $k$-tuples. In view of our results, for graph embeddings:

$$
\rho_0\left(\mathsf{gwl}_k^{(\infty)}\right) \subseteq \rho_0\left(\mathsf{TL}_{k+1}(\Omega)\right) \subseteq \rho_0\left(k\text{-FGNN}\right)
$$

in accordance again with Azizian & Lelarge (2021). We here emphasize that the upper bounds in terms of $k$-WL are obtained without the need to know how $k$-WL works. Indeed, one can really just focus on casting layers in the right tensor language!

We remark that Azizian & Lelarge (2021) define vertex embedding $k$-FGNNs in a different way. Indeed, for a vertex $v$, its embedding is obtained by aggregating over all $(k-1)$-tuples in the remaining coordinates of the tensors. They define $\mathsf{vwl}_k$ accordingly. From the tensor language point of view, this corresponds to adding $k - 1$ to the summation depth. With that definition, however, the connection between rounds and layers, as in Azizian & Lelarge (2021), is lost. This is the reason why we defined vertex embedding $k$-FGNNs in a different way, which lets us ensure a correspondence between rounds and layers for vertex embeddings.

Other higher-order examples.
It is readily verified that $t$-layered $k$-GNNs (Morris et al., 2019) can be represented in $\mathsf{TL}_{k+1}^{(t)}(\Omega)$, recovering the known upper bound by $\mathsf{wl}_k^{(t)}$ (Morris et al., 2019). It is an equally easy exercise to show that 2-WL-convolutions (Damke et al., 2020) and Ring-GNNs (Chen et al., 2019) are bounded by 2-WL, by simply writing their layers in $\mathsf{TL}_3(\Omega)$. The invariant graph networks ($k$-IGNs) (Maron et al., 2019b) will be treated in Section E, as their representation in $\mathsf{TL}_{k+1}(\Omega)$ requires some work.

# D.3 AUGMENTED GNNS

Higher-order GNN architectures such as $k$-GNNs, $k$-FGNNs and $k$-IGNs incur a substantial cost in terms of memory and computation (Morris et al., 2020). Some recent proposals infuse more efficient GNNs with higher-order information by means of a pre-processing step. We next show that the tensor language approach also makes it possible to obtain upper bounds on the separation power of such "augmented" GNNs.

We first consider $\mathcal{F}$-MPNNs (Barceló et al., 2021) in which the initial vertex features are augmented with homomorphism counts of rooted graph patterns. More precisely, let $P^r$ be a connected rooted graph (with root vertex $r$), and consider a graph $G = (V_G, E_G, \mathrm{col}_G)$ and a vertex $v \in V_G$. Then, $\mathrm{hom}(P^r, G^v)$ denotes the number of homomorphisms from $P$ to $G$ mapping $r$ to $v$. We recall that a homomorphism is an edge-preserving mapping between vertex sets. Given a collection $\mathcal{F} = \{P_1^r, \ldots, P_\ell^r\}$ of rooted patterns, an $\mathcal{F}$-MPNN runs an MPNN on the augmented initial vertex features:

$$
\tilde{\mathbf{F}}_{v:}^{(0)} := \left(\mathbf{F}_{v:}^{(0)}, \mathrm{hom}\left(P_1^r, G^v\right), \dots, \mathrm{hom}\left(P_\ell^r, G^v\right)\right).
$$

Now, take any GNN architecture that can be cast in $\mathrm{GTL}(\Omega)$ or $\mathrm{TL}_2(\Omega)$ and assume, for simplicity of exposition, that a $t$-layer GNN corresponds to expressions in $\mathrm{GTL}^{(t)}(\Omega)$ or $\mathrm{TL}_2^{(t)}(\Omega)$. In order to analyze the impact of the augmented features, one only needs to revise the expressions $\varphi_j^{(0)}(x_1)$ that represent the initial features. In the absence of graph patterns, $\varphi_j^{(0)}(x_1) \coloneqq P_j(x_1)$, as we have seen before. By contrast, to represent $\tilde{\pmb{F}}_{vj}^{(0)}$ we need to cast the computation of $\mathrm{hom}(P_i^r,G^v)$ in TL. Assume that the graph pattern $P_i$ consists of $p$ vertices and let us identify the vertex set with $[p]$. Furthermore, without loss of generality, we assume that vertex "1" is the root vertex in $P_i$. To obtain $\mathrm{hom}(P_i^r,G^v)$ we need to create an indicator function for the graph pattern $P_i$ and then count how many times this indicator value is equal to one in $G$. The indicator function for $P_i$ is simply given by the expression $\prod_{uv\in E_{P_i}}E(x_u,x_v)$. Then, counting just boils down to summation over all index variables except the one for the root vertex. More precisely, if we define

$$
\varphi_{P_i}(x_1) := \sum_{x_2} \dots \sum_{x_p} \prod_{uv \in E_{P_i}} E(x_u, x_v),
$$

then $\llbracket \varphi_{P_i}, v \rrbracket_G = \mathrm{hom}(P_i^r, G^v)$. This encoding results in an expression in $\mathsf{TL}_p$. However, it is well-known that we can equivalently write $\varphi_{P_i}(x_1)$ as an expression $\tilde{\varphi}_{P_i}(x_1)$ in $\mathsf{TL}_{k+1}$ where $k$ is the treewidth of the graph $P_i$. As such, our results imply that $\mathcal{F}$-MPNNs are bounded in separation power by $\bar{k}$-WL, where $\bar{k}$ is the maximal treewidth of the graphs in $\mathcal{F}$. We thus recover the known upper bound as given in Barceló et al.
(2021) using our tensor language approach.

Another example of augmented GNN architectures are the Graph Substructure Networks (GSNs) (Bouritsas et al., 2020). By contrast to $\mathcal{F}$-MPNNs, subgraph isomorphism counts rather than homomorphism counts are used to augment the initial features. At the core of a GSN thus lies the computation of $\mathrm{sub}(P^r,G^v)$, the number of subgraphs $H$ of $G$ isomorphic to $P$ (and such that the isomorphisms map $r$ to $v$). In a similar way as for homomorphism counts, we can directly cast the computation of $\mathrm{sub}(P^r,G^v)$ in TL, resulting again in the use of $p$ index variables. A possible reduction in the number of index variables, however, can be obtained by relying on a result (Theorem 1.1) by Curticapean et al. (2017) in which it is shown that $\mathrm{sub}(P^r,G^v)$ can be computed in terms of homomorphism counts of graph patterns derived from $P^r$. More precisely, Curticapean et al. (2017) define $\mathrm{spasm}(P^r)$ as the set of graphs consisting of all possible homomorphic images of $P^r$. It is then readily verified that if the maximal treewidth of the graphs in $\mathrm{spasm}(P^r)$ is $k$, then $\mathrm{sub}(P^r,G^v)$ can be cast as an expression in $\mathrm{TL}_{k+1}$. Hence, GSNs using a pattern collection $\mathcal{F}$ can be represented in $\mathrm{TL}_{k+1}$, where $k$ is the maximal treewidth of the graphs in any of the spasms of patterns in $\mathcal{F}$, and thus are bounded in separation power by $k$-WL, in accordance with the results by Barceló et al. (2021).

As a final example, we consider the recently introduced Message Passing Simplicial Networks (MPSNs) (Bodnar et al., 2021). In a nutshell, MPSNs are run on simplicial complexes of graphs instead of on the original graphs. We sketch how our tensor language approach can be used to assess the separation power of MPSNs on clique complexes.
We use the simplified version of MPSNs, which has the same expressive power as the full version of MPSNs (Theorem 6 in Bodnar et al. (2021)).

We recall some definitions. Let $\mathsf{Cliques}(G)$ denote the set of all cliques in $G$. Given two cliques $c$ and $c'$ in $\mathsf{Cliques}(G)$, define $c \prec c'$ if $c \subset c'$ and there exists no $c''$ in $\mathsf{Cliques}(G)$ such that $c \subset c'' \subset c'$. We define $\mathrm{Boundary}(c, G) := \{c' \in \mathsf{Cliques}(G) \mid c' \prec c\}$ and $\mathrm{Upper}(c, G) := \{c' \in \mathsf{Cliques}(G) \mid \exists c'' \in \mathsf{Cliques}(G), c' \prec c''\text{ and } c \prec c''\}$.

For each $c$ in $\mathsf{Cliques}(G)$ we have an initial feature vector $\pmb{F}_{c:}^{(0)}\in \mathbb{R}^{1\times \ell}$. Bodnar et al. (2021) initialize all initial features with the same value. Then, in layer $t$, for each $c\in \mathsf{Cliques}(G)$, features are updated as follows:

$$
\boldsymbol{G}_{c:}^{(t)} = F_B\left(\left\{\left\{\mathsf{mlp}_B\left(\boldsymbol{F}_{c:}^{(t-1)}, \boldsymbol{F}_{c':}^{(t-1)}\right) \mid c' \in \mathrm{Boundary}(c, G)\right\}\right\}\right)
$$

$$
\boldsymbol{H}_{c:}^{(t)} = F_U\left(\left\{\left\{\mathsf{mlp}_U\left(\boldsymbol{F}_{c:}^{(t-1)}, \boldsymbol{F}_{c':}^{(t-1)}, \boldsymbol{F}_{c \cup c':}^{(t-1)}\right) \mid c' \in \mathrm{Upper}(c, G)\right\}\right\}\right)
$$

$$
\boldsymbol{F}_{c:}^{(t)} = \mathsf{mlp}(\boldsymbol{F}_{c:}^{(t-1)}, \boldsymbol{G}_{c:}^{(t)}, \boldsymbol{H}_{c:}^{(t)}),
$$

where $F_B$ and $F_U$ are aggregation functions and $\mathsf{mlp}_B$, $\mathsf{mlp}_U$ and $\mathsf{mlp}$ are MLPs.
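The clique poset underlying these updates can be made concrete with a small sketch. This is illustrative only: cliques are enumerated by brute force (feasible for tiny graphs only), and we use the fact that, in a clique complex, $c' \prec c$ amounts to $c'$ being a facet of $c$, i.e. $|c'| = |c| - 1$.

```python
from itertools import combinations

def cliques(adj):
    """All cliques of a graph given as an adjacency-set dict
    (brute force over vertex subsets; tiny graphs only)."""
    V = list(adj)
    out = []
    for r in range(1, len(V) + 1):
        for S in combinations(V, r):
            if all(v in adj[u] for u, v in combinations(S, 2)):
                out.append(frozenset(S))
    return out

def boundary(c, cls):
    # c' ≺ c: in the clique complex this means c' ⊂ c with |c'| = |c| - 1.
    return [d for d in cls if d < c and len(d) == len(c) - 1]

def upper(c, cls):
    # cliques c' (≠ c) such that some c'' in the complex has c ≺ c'' and c' ≺ c''.
    return [d for d in cls if d != c and any(
        c < e and d < e and len(e) == len(c) + 1 == len(d) + 1 for e in cls)]

# Triangle 0-1-2 with a pendant edge 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cls = cliques(adj)
tri = frozenset({0, 1, 2})
print(sorted(map(sorted, boundary(tri, cls))))              # [[0, 1], [0, 2], [1, 2]]
print(sorted(map(sorted, upper(frozenset({0, 1}), cls))))   # [[0, 2], [1, 2]]
```

The MPSN layer would then aggregate MLP-transformed features over exactly these boundary and upper adjacencies for each clique.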
With some effort, one can represent these computations by expressions in $\mathsf{TL}_p(\Omega,\Theta)$ where $p$ is the size of the largest clique in $G$. As such, the separation power of clique-complex MPSNs on graphs of clique size at most $p$ is bounded by $(p-1)$-WL. And indeed, Bodnar et al. (2021) consider the Rook's $4\times 4$ graph, which contains a 4-clique, and the Shrikhande graph, which does not contain a 4-clique. As such, the analysis above implies that clique-complex MPSNs are bounded by 2-WL on the Shrikhande graph, and by 3-WL on the Rook's graph, consistent with the observation in Bodnar et al. (2021). A more detailed analysis of MPSNs in terms of summation depth and for other simplicial complexes is left as future work.

This illustrates again that our approach can be used to assess the separation power of a variety of GNN architectures in terms of $k$-WL, by simply writing them as tensor language expressions. Furthermore, bounds in terms of $k$-WL can be obtained for augmented GNNs, which form a more efficient way of incorporating higher-order graph structural information than higher-order GNNs.

# D.4 SPECTRAL GNNS

In general, spectral GNNs are defined in terms of eigenvectors and eigenvalues of the (normalized) graph Laplacian (Bruna et al., 2014; Defferrard et al., 2016; Levie et al., 2019; Balcilar et al., 2021b). The diagonalization of the graph Laplacian is, however, avoided in practice, due to its excessive cost. Instead, by relying on approximation results in spectral graph analysis (Hammond et al., 2011), the layers of practical spectral GNNs are defined in terms of propagation matrices consisting of functions that operate directly on the graph Laplacian. This viewpoint allows for a spectral analysis of spectral and "spatial" GNNs in a uniform way, as shown by Balcilar et al. (2021b).
In this section, we consider two specific instances of spectral GNNs: ChebNet (Defferrard et al., 2016) and CayleyNet (Levie et al., 2019), and assess their separation power in terms of tensor logic. Our general results then provide bounds on their separation power in terms of color refinement and 2-WL, respectively.

ChebNet. The separation power of ChebNet (Defferrard et al., 2016) was already analyzed in Balcilar et al. (2021a) by representing ChebNets in the MATLANG matrix query language (Brijder et al., 2019). It was shown (Theorem 2 in Balcilar et al. (2021a)) that it is only the maximal eigenvalue $\lambda_{\mathrm{max}}$ of the graph Laplacian, used in the layers of ChebNet, that may cause the separation power of ChebNet to go beyond 1-WL. We here revisit and refine this result by showing that, when ignoring the use of $\lambda_{\mathrm{max}}$, the separation power of ChebNet is bounded already by color refinement (which, as mentioned in Section 2, is weaker than 1-WL for vertex embeddings). In a nutshell, the layers of a ChebNet are defined in terms of Chebyshev polynomials of the normalized Laplacian $L_{\mathrm{norm}} = I - D^{-1/2} \cdot A \cdot D^{-1/2}$, and these polynomials can easily be represented in $\mathrm{GTL}(\Omega)$. One can alternatively use the graph Laplacian $L = D - A$ in a ChebNet, which allows for a similar analysis. The distinction between the choice of $L_{\mathrm{norm}}$ and $L$ only shows in the needed summation depth (in a similar way as for the GCNs described earlier). We only consider the normalized Laplacian here.

More precisely, following Balcilar et al.
(2021a;b), in layer $t$, vertex embeddings are updated in a ChebNet according to:

$$
\boldsymbol{F}^{(t)} := \sigma\left(\sum_{s=1}^{p} \boldsymbol{C}^{(s)} \cdot \boldsymbol{F}^{(t-1)} \cdot \boldsymbol{W}^{(t-1,s)}\right),
$$

with

$$
\boldsymbol{C}^{(1)} := \boldsymbol{I}, \quad \boldsymbol{C}^{(2)} := \frac{2}{\lambda_{\max}} \boldsymbol{L}_{\mathrm{norm}} - \boldsymbol{I}, \quad \boldsymbol{C}^{(s)} := 2\boldsymbol{C}^{(2)} \cdot \boldsymbol{C}^{(s-1)} - \boldsymbol{C}^{(s-2)}, \text{ for } s \geq 3,
$$

and where $\lambda_{\mathrm{max}}$ denotes the maximum eigenvalue of $\boldsymbol{L}_{\mathrm{norm}}$. We next use a similar analysis as in Balcilar et al. (2021a). That is, we ignore for the moment the maximal eigenvalue $\lambda_{\mathrm{max}}$ and redefine $\boldsymbol{C}^{(2)}$ as $c\boldsymbol{L}_{\mathrm{norm}} - \boldsymbol{I}$ for some constant $c$. We thus see that each $\boldsymbol{C}^{(s)}$ is a polynomial of the form $p_s(c,\boldsymbol{L}_{\mathrm{norm}}):=\sum_{i=0}^{q_s}a_i^{(s)}(c)\cdot(\boldsymbol{L}_{\mathrm{norm}})^i$ with scalar functions $a_i^{(s)}:\mathbb{R}\to \mathbb{R}$ and where we interpret $(\boldsymbol{L}_{\mathrm{norm}})^0 = \boldsymbol{I}$. To upper bound the separation power using our tensor language approach, we can thus shift our attention entirely to representing $(\boldsymbol{L}_{\mathrm{norm}})^i \cdot \boldsymbol{F}^{(t-1)} \cdot \boldsymbol{W}^{(t-1,s)}$ for powers $i \in \mathbb{N}$. Furthermore, since $(\boldsymbol{L}_{\mathrm{norm}})^i$ is again a polynomial of the form $q_i(\boldsymbol{D}^{-1/2} \cdot \boldsymbol{A} \cdot \boldsymbol{D}^{-1/2}) := \sum_{j=0}^{r_i} b_{ij} \cdot (\boldsymbol{D}^{-1/2} \cdot \boldsymbol{A} \cdot \boldsymbol{D}^{-1/2})^j$, we can further narrow down the problem to representing

$$
\left(\boldsymbol{D}^{-1/2} \cdot \boldsymbol{A} \cdot \boldsymbol{D}^{-1/2}\right)^j \cdot \boldsymbol{F}^{(t-1)} \cdot \boldsymbol{W}^{(t-1,s)}
$$

in $\mathrm{GTL}(\Omega)$, for powers $j\in \mathbb{N}$. And indeed, combining our analysis for GCNs and SGCs results in expressions in $\mathrm{GTL}(\Omega)$. As an example, let us consider $(\boldsymbol{D}^{-1/2}\cdot \boldsymbol{A}\cdot \boldsymbol{D}^{-1/2})^2\cdot \boldsymbol{F}^{(t-1)}\cdot \boldsymbol{W}^{(t-1)}$, that is, we use a power of two.
It then suffices to define, for each output dimension $j$, the expressions:

$$
\psi_j^2(x_1) = f_{1/\sqrt{x}}\left(\sum_{x_2} E(x_1, x_2)\right) \cdot \sum_{x_2}\left(E(x_1, x_2) \cdot f_{1/x}\left(\sum_{x_1} E(x_2, x_1)\right) \cdot \sum_{x_1}\Big(E(x_2, x_1) \cdot f_{1/\sqrt{x}}\big(\sum_{x_2} E(x_1, x_2)\big) \cdot \big(\sum_{i=1}^{d_{t-1}} W_{ij}^{(t-1)} \varphi_i^{(t-1)}(x_1)\big)\Big)\right),
$$

where the $\varphi_i^{(t-1)}(x_1)$ are expressions representing layer $t - 1$. It is then readily verified that we can use $\psi_j^2(x_1)$ to cast layer $t$ of a ChebNet in $\mathrm{GTL}(\Omega)$ with $\Omega$ consisting of $f_{1/\sqrt{x}}:\mathbb{R}\to \mathbb{R}:x\mapsto \frac{1}{\sqrt{x}}$, $f_{1/x}:\mathbb{R}\to \mathbb{R}:x\mapsto \frac{1}{x}$, and the used activation function $\sigma$. We thus recover (and slightly refine) Theorem 2 in Balcilar et al. (2021a):

Corollary D.1. On graphs sharing the same $\lambda_{\mathrm{max}}$ value, the separation power of ChebNet is bounded by color refinement, both for graph and vertex embeddings.

A more fine-grained analysis of the expressions is needed when one is interested in bounding the summation depth and thus the number of rounds needed for color refinement. Moreover, as shown by Balcilar et al. (2021a), when graphs have non-regular components with different $\lambda_{\mathrm{max}}$ values, ChebNet can distinguish them, whilst 1-WL cannot. To our knowledge, $\lambda_{\mathrm{max}}$ cannot be computed in $\mathsf{TL}_k(\Omega)$ for any $k$. This implies that it is not clear whether an upper bound on the separation power can be obtained for ChebNet when $\lambda_{\mathrm{max}}$ is taken into account.
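As a concrete companion to the recurrence above, the ChebNet propagation matrices can be computed directly in NumPy. This is a sketch: $\lambda_{\max}$ is obtained here by a dense eigendecomposition, which is exactly the expensive step that practical implementations approximate or avoid.

```python
import numpy as np

def cheb_matrices(A, p):
    """Propagation matrices C^(1..p) of ChebNet (sketch), built from the
    normalized Laplacian L_norm = I - D^{-1/2} A D^{-1/2} via the
    Chebyshev recurrence C^(s) = 2 C^(2) C^(s-1) - C^(s-2)."""
    n = A.shape[0]
    d = A.sum(1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    L = np.eye(n) - Dinv @ A @ Dinv
    lam_max = np.linalg.eigvalsh(L).max()
    C = [np.eye(n), (2.0 / lam_max) * L - np.eye(n)]
    for s in range(2, p):
        C.append(2 * C[1] @ C[-1] - C[-2])
    return C

# Triangle with a pendant vertex.
A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
C = cheb_matrices(A, 4)
# Each C^(s) is a polynomial in the symmetric matrix L_norm, hence symmetric.
print(all(np.allclose(Cs, Cs.T) for Cs in C))  # True
```

The symmetry check reflects the observation used in the analysis: every $C^{(s)}$ is a polynomial in $L_{\mathrm{norm}}$.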
It is an interesting open question whether there are two graphs $G$ and $H$ which cannot be distinguished by $k$-WL but can be distinguished based on $\lambda_{\mathrm{max}}$. A positive answer would imply that the computation of $\lambda_{\mathrm{max}}$ is beyond reach for $\mathsf{TL}(\Omega)$ and that other techniques are needed.

CayleyNet. We next show how the separation power of CayleyNet (Levie et al., 2019) can be analyzed. To our knowledge, this analysis is new. We show that the separation power of CayleyNet is bounded by 2-WL. Following Levie et al. (2019) and Balcilar et al. (2021b), in each layer $t$, a CayleyNet updates features as follows:

$$
\boldsymbol{F}^{(t)} := \sigma\left(\sum_{s=1}^{p} \boldsymbol{C}^{(s)} \cdot \boldsymbol{F}^{(t-1)} \cdot \boldsymbol{W}^{(t-1,s)}\right),
$$

with

$$
\boldsymbol{C}^{(1)} := \boldsymbol{I}, \quad \boldsymbol{C}^{(2s)} := \mathrm{Re}\left(\left(\frac{h\boldsymbol{L} - \imath\boldsymbol{I}}{h\boldsymbol{L} + \imath\boldsymbol{I}}\right)^s\right), \quad \boldsymbol{C}^{(2s+1)} := \mathrm{Re}\left(\imath\left(\frac{h\boldsymbol{L} - \imath\boldsymbol{I}}{h\boldsymbol{L} + \imath\boldsymbol{I}}\right)^s\right),
$$

where $h$ is a constant, $\imath$ is the imaginary unit, and $\mathrm{Re}: \mathbb{C} \to \mathbb{R}$ maps a complex number to its real part. We immediately observe that a CayleyNet requires the use of complex numbers and matrix inversion. So far, we considered real numbers only, but as far as our separation results are concerned, the choice between real and complex numbers is insignificant. In fact, only the proof of Proposition C.3 requires a minor modification when working with complex numbers: the infinite disjunctions used in the proof now need to range over complex numbers.
For matrix inversion, when dealing with separation power, one can use different expressions in $\mathrm{TL}(\Omega)$ for computing the matrix inverse, depending on the input size. And indeed, it is well-known (see e.g., Csanky (1976)) that, based on the characteristic polynomial of $\boldsymbol{A}$, the inverse $\boldsymbol{A}^{-1}$ of any matrix $\boldsymbol{A} \in \mathbb{R}^{n \times n}$ can be computed as a polynomial $\frac{-1}{c_n} \sum_{i=1}^{n-1} c_i \boldsymbol{A}^{n-1-i}$ if $c_n \neq 0$, where each coefficient $c_i$ is a polynomial in $\mathrm{tr}(\boldsymbol{A}^j)$, for various $j$. Here, $\mathrm{tr}(\cdot)$ is the trace of a matrix. As a consequence, layers in CayleyNet can be viewed as polynomials in $h\boldsymbol{L} - \imath\boldsymbol{I}$ whose coefficients are polynomials in $\mathrm{tr}((h\boldsymbol{L} - \imath\boldsymbol{I})^j)$. One now needs three index variables to represent the trace computations $\mathrm{tr}((h\boldsymbol{L} - \imath\boldsymbol{I})^j)$. Indeed, let $\varphi_0(x_1, x_2)$ be the $\mathrm{TL}_2$ expression representing $h\boldsymbol{L} - \imath\boldsymbol{I}$. Then, for example, $(h\boldsymbol{L} - \imath\boldsymbol{I})^j$ can be computed in $\mathrm{TL}_3$ using

$$
\varphi_j(x_1, x_2) := \sum_{x_3} \varphi_0(x_1, x_3) \cdot \varphi_{j-1}(x_3, x_2)
$$

and hence $\mathrm{tr}((h\boldsymbol{L} - \imath\boldsymbol{I})^j)$ is represented by $\sum_{x_1}\sum_{x_2}\varphi_j(x_1,x_2)\cdot \mathbf{1}_{x_1 = x_2}$. In other words, we obtain expressions in $\mathsf{TL}_3$. The polynomials in $h\boldsymbol{L} - \imath\boldsymbol{I}$ can be represented in $\mathsf{TL}_2$ just as for ChebNet. This implies that each layer in CayleyNet can be represented, on graphs of fixed size, by $\mathsf{TL}_3(\Omega)$ expressions, where $\Omega$ includes the activation function $\sigma$ and the function $\mathrm{Re}$. This suffices to use our general results and conclude that CayleyNets are bounded in separation power by 2-WL. An interesting question is to find graphs that can be separated by a CayleyNet but not by 1-WL. We leave this as an open problem.
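For concreteness, the Cayley propagation matrices can be sketched in NumPy with a direct complex matrix inverse (rather than the characteristic-polynomial expansion used above, which only matters for the separation argument). The choice of `h` and the toy graph are arbitrary.

```python
import numpy as np

def cayley_matrices(L, h, p):
    """CayleyNet propagation matrices C^(1..p) (sketch): C^(1) = I,
    C^(2s) = Re(R^s) and C^(2s+1) = Re(i R^s), where R is the Cayley
    transform (hL - iI)(hL + iI)^{-1}, computed by a direct inverse."""
    n = L.shape[0]
    I = np.eye(n)
    R = (h * L - 1j * I) @ np.linalg.inv(h * L + 1j * I)
    C, P = [I], I.astype(complex)
    for s in range(1, (p + 1) // 2 + 1):
        P = P @ R                  # R^s
        C.append(P.real)           # C^(2s)
        C.append((1j * P).real)    # C^(2s+1)
    return C[:p]

# Path graph on 3 vertices; L is its (unnormalized) graph Laplacian.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
L = np.diag(A.sum(1)) - A
C = cayley_matrices(L, h=1.0, p=5)
print(len(C), C[0].shape)          # 5 (3, 3)
```

Note that $hL + \imath I$ is always invertible for real symmetric $L$, since $\imath$ shifts every (real) eigenvalue of $hL$ off zero.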
# E PROOF OF THEOREM 5.1

We here consider another higher-order GNN proposal: the invariant graph networks, or $k$-IGNs, of Maron et al. (2019b). By contrast to $k$-FGNNs, $k$-IGNs are linear architectures. If we denote by $k\text{-IGN}^{(t)}$ the class of $t$-layered $k$-IGNs, then the following inclusions are known (Maron et al., 2019b):

$$
\rho_1\left(k\text{-}\mathsf{IGN}^{(t)}\right) \subseteq \rho_1\left(\mathsf{vwl}_{k-1}^{(t)}\right) \quad \text{and} \quad \rho_0\left(k\text{-}\mathsf{IGN}\right) \subseteq \rho_0\left(\mathsf{gwl}_{k-1}^{(\infty)}\right).
$$

The reverse inclusions were posed as open problems in Maron et al. (2019a) and were shown to hold by Chen et al. (2020) for $k = 2$, by means of an extensive case analysis and by relying on properties of 1-WL. In this section, we show that the separation power of $k$-IGNs is bounded by that of $(k-1)$-WL, for arbitrary $k \geq 2$. Theorem 4.2 tells us that we can entirely shift our attention to showing that the layers of $k$-IGNs can be represented in $\mathsf{TL}_k(\Omega)$. In other words, we only need to show that $k$ index variables suffice for the layers. As we will see below, this requires a bit of work since a naive representation of the layers of $k$-IGNs uses $2k$ index variables. Nevertheless, we show that this can be reduced to $k$ index variables only.

By inspecting the expressions needed to represent the layers of $k$-IGNs in $\mathsf{TL}_k(\Omega)$, we obtain that a $t$-layered $k$-IGN requires expressions of summation depth $tk$. In other words, the correspondence between layers and summation depth is precisely in sync. This implies, by Theorem 4.2:

$$
\rho_1\left(k\text{-}\mathsf{IGN}\right) = \rho_1\left(\mathsf{vwl}_{k-1}^{(\infty)}\right),
$$

where we ignore the number of layers.
We similarly obtain that $\rho_0(k\text{-}\mathsf{IGN}) = \rho_0(\mathsf{gwl}_{k-1}^{(\infty)})$, thereby answering the open problem posed in Maron et al. (2019a). Finally, we observe that the $k$-IGNs used in Maron et al. (2019b) to show the inclusion $\rho_1(k\text{-}\mathsf{IGN}^{(t)}) \subseteq \rho_1(\mathsf{vwl}_{k-1}^{(t)})$ are of a very simple form. By defining a simple class of $k$-IGNs, denoted by $k$-GINs, we obtain

$$
\rho_1\left(k\text{-}\mathsf{GIN}^{(t)}\right) = \rho_1\left(\mathsf{vwl}_{k-1}^{(t)}\right),
$$

thereby recovering the layer/round connection.

We start with the following lemma:

Lemma E.1. For any $k \geq 2$, a $t$-layered $k$-IGN can be represented in $\mathsf{TL}_k^{(tk)}(\Omega)$.

Before proving this lemma, we recall $k$-IGNs. These are architectures that consist of linear equivariant layers. Such linear layers allow for an explicit description. Indeed, following Maron et al. (2019c), let $\sim_{\ell}$ be the equality pattern equivalence relation on $[n]^{\ell}$ such that for $\mathbf{a}, \mathbf{b} \in [n]^{\ell}$, $\mathbf{a} \sim_{\ell} \mathbf{b}$ if and only if $a_i = a_j \Leftrightarrow b_i = b_j$ for all $i, j \in [\ell]$. We denote by $[n]^{\ell}/\sim_{\ell}$ the equivalence classes induced by $\sim_{\ell}$. Let us denote by $\mathbf{F}^{(t-1)} \in \mathbb{R}^{n^k \times d_{t-1}}$ the tensor computed by a $k$-IGN in layer $t-1$. Then, in layer $t$, a new tensor in $\mathbb{R}^{n^k \times d_t}$ is computed, as follows.
For $j \in [d_t]$ and $\boldsymbol{v} = (v_1, \ldots, v_k) \in [n]^k$:

$$
F_{v_1, \dots, v_k, j}^{(t)} := \sigma\left(\sum_{\gamma \in [n]^{2k}/\sim_{2k}} \sum_{\boldsymbol{w} \in [n]^{k}} \mathbf{1}_{(\boldsymbol{v}, \boldsymbol{w}) \in \gamma} \sum_{i \in [d_{t-1}]} c_{\gamma, i, j} F_{w_1, \dots, w_k, i}^{(t-1)} + \sum_{\mu \in [n]^{k}/\sim_{k}} \mathbf{1}_{\boldsymbol{v} \in \mu} b_{\mu, j}\right) \tag{1}
$$

for an activation function $\sigma$, constants $c_{\gamma, i, j}$ and $b_{\mu, j}$ in $\mathbb{R}$, and where $\mathbf{1}_{(\boldsymbol{v}, \boldsymbol{w}) \in \gamma}$ and $\mathbf{1}_{\boldsymbol{v} \in \mu}$ are indicator functions for the $2k$-tuple $(\boldsymbol{v}, \boldsymbol{w})$ to be in the equivalence class $\gamma \in [n]^{2k}/\sim_{2k}$ and the $k$-tuple $\boldsymbol{v}$ to be in the class $\mu \in [n]^k/\sim_k$. As initial tensor $\mathbf{F}^{(0)}$ one defines $F_{v_1, \ldots, v_k, :}^{(0)} \coloneqq \mathsf{atp}_k(G, \boldsymbol{v}) \in \mathbb{R}^{d_0}$, with $d_0 = 2\binom{k}{2} + k\ell$ where $\ell$ is the number of initial vertex labels, just as for $k$-FGNNs.

We remark that the need for a summation depth of $tk$ in the expressions in $\mathsf{TL}_k(\Omega)$, or equivalently for $tk$ rounds of $(k-1)$-WL, can intuitively be explained by the fact that each layer of a $k$-IGN aggregates more information from "neighbouring" $k$-tuples than $(k-1)$-WL does. Indeed, in each layer, a $k$-IGN can use the previous tuple embeddings of all possible $k$-tuples. In a single round of $(k-1)$-WL, only previous tuple embeddings from specific sets of $k$-tuples are used. It is only after an additional $k-1$ rounds that $(k-1)$-WL gets to the information about arbitrary $k$-tuples, whereas this information is available to a $k$-IGN in one layer directly.

Proof of Lemma E.1.
We have seen how $\mathbf{F}^{(0)}$ can be represented in $\mathrm{TL}_k(\Omega)$ when dealing with $k$-FGNNs. We now assume that the $(t-1)$th layer $\mathbf{F}^{(t-1)}$ can also be represented by $d_{t-1}$ expressions in $\mathrm{TL}_k^{((t-1)k)}(\Omega)$ and show that the same holds for the $t$th layer.

We first represent $\mathbf{F}^{(t)}$ in $\mathrm{TL}_{2k}(\Omega)$, based on the explicit description given earlier. The expressions use index variables $x_1,\ldots,x_k$ and $y_1,\ldots,y_k$. More specifically, for $j\in [d_t]$ we consider the expressions:

$$
\varphi_j^{(t)}(x_1, \dots, x_k) := \sigma\left(\sum_{\gamma \in [n]^{2k}/\sim_{2k}} \sum_{i=1}^{d_{t-1}} c_{\gamma, i, j} \sum_{y_1} \dots \sum_{y_k} \psi_{\gamma}(x_1, \dots, x_k, y_1, \dots, y_k) \cdot \varphi_i^{(t-1)}(y_1, \dots, y_k) + \sum_{\mu \in [n]^{k}/\sim_{k}} b_{\mu, j} \cdot \psi_{\mu}(x_1, \dots, x_k)\right), \tag{2}
$$

where $\psi_{\mu}(x_1,\ldots,x_k)$ is a product of expressions of the form $\mathbf{1}_{x_i\,\mathrm{op}\,x_j}$ encoding the equality pattern $\mu$, and similarly, $\psi_{\gamma}(x_1,\ldots,x_k,y_1,\ldots,y_k)$ is a product of expressions of the form $\mathbf{1}_{x_i\,\mathrm{op}\,x_j}$, $\mathbf{1}_{y_i\,\mathrm{op}\,y_j}$ and $\mathbf{1}_{x_i\,\mathrm{op}\,y_j}$ encoding the equality pattern $\gamma$. These expressions are indicator functions for their corresponding equality patterns. That is,

$$
\llbracket \psi_{\gamma}, (\boldsymbol{v}, \boldsymbol{w}) \rrbracket_G = \left\{ \begin{array}{ll} 1 & \text{if } (\boldsymbol{v}, \boldsymbol{w}) \in \gamma \\ 0 & \text{otherwise} \end{array} \right.
\quad \llbracket \psi_{\mu}, \boldsymbol{v} \rrbracket_G = \left\{ \begin{array}{ll} 1 & \text{if } \boldsymbol{v} \in \mu \\ 0 & \text{otherwise} \end{array} \right.
$$

We remark that in the expressions $\varphi_j^{(t)}$ we have two kinds of summations: those ranging over a fixed number of elements (over equality patterns, feature dimensions), and those ranging over the index variables $y_1, \ldots, y_k$. The latter are the only ones contributing to the summation depth. The former are just concise representations of a long summation over a fixed number of expressions.

We now only need to show that we can equivalently write $\varphi_j^{(t)}(x_1, \ldots, x_k)$ as an expression in $\mathsf{TL}_k(\Omega)$, that is, using only the indices $x_1, \ldots, x_k$. We can already ignore the term $\sum_{\mu \in [n]^k/\sim_k} b_{\mu, j} \cdot \psi_{\mu}(x_1, \dots, x_k)$ since it is already in $\mathsf{TL}_k(\Omega)$. Furthermore, this expression does not affect the summation depth.

Furthermore, as just mentioned, we can expand expression $\varphi_j^{(t)}$ into linear combinations of simpler expressions. As such, it suffices to show that $k$ index variables suffice for each expression of the form:

$$
\sum_{y_1} \dots \sum_{y_k} \psi_{\gamma}(x_1, \dots, x_k, y_1, \dots, y_k) \cdot \varphi_i^{(t-1)}(y_1, \dots, y_k), \tag{3}
$$

obtained by fixing $\gamma$ and $i$ in expression (2). To reduce the number of variables, we first eliminate all disequalities using the inclusion-exclusion principle.
More precisely, we observe that $\psi_{\gamma}(\boldsymbol{x}, \boldsymbol{y})$ can be written as:

$$
\prod_{(i,j) \in I} \mathbf{1}_{x_i = x_j} \cdot \prod_{(i,j) \in \bar{I}} \mathbf{1}_{x_i \neq x_j} \cdot \prod_{(i,j) \in J} \mathbf{1}_{y_i = y_j} \cdot \prod_{(i,j) \in \bar{J}} \mathbf{1}_{y_i \neq y_j} \cdot \prod_{(i,j) \in K} \mathbf{1}_{x_i = y_j} \cdot \prod_{(i,j) \in \bar{K}} \mathbf{1}_{x_i \neq y_j}
$$

$$
= \sum_{A \subseteq \bar{I}} \sum_{B \subseteq \bar{J}} \sum_{C \subseteq \bar{K}} (-1)^{|A| + |B| + |C|} \prod_{(i,j) \in I \cup A} \mathbf{1}_{x_i = x_j} \cdot \prod_{(i,j) \in J \cup B} \mathbf{1}_{y_i = y_j} \cdot \prod_{(i,j) \in K \cup C} \mathbf{1}_{x_i = y_j}, \tag{4}
$$

for some sets $I$, $J$ and $K$ of pairs of indices in $[k]^2$, and where $\bar{I} = [k]^2 \setminus I$, $\bar{J} = [k]^2 \setminus J$ and $\bar{K} = [k]^2 \setminus K$. Here we use that $\mathbf{1}_{x_i \neq x_j} = 1 - \mathbf{1}_{x_i = x_j}$, $\mathbf{1}_{y_i \neq y_j} = 1 - \mathbf{1}_{y_i = y_j}$ and $\mathbf{1}_{x_i \neq y_j} = 1 - \mathbf{1}_{x_i = y_j}$, and use the inclusion-exclusion principle to obtain a polynomial in equality conditions only.

In view of expression (4), we can push the summations over $y_1, \ldots, y_k$ in expression (3) to the subexpressions that actually use $y_1, \ldots, y_k$. That is, we can rewrite expression (3) into the equivalent expression:

$$
\begin{array}{l} \sum_{A \subseteq \bar{I}} \sum_{B \subseteq \bar{J}} \sum_{C \subseteq \bar{K}} (-1)^{|A| + |B| + |C|} \cdot \prod_{(i,j) \in I \cup A} \mathbf{1}_{x_i = x_j} \\ \cdot \left(\sum_{y_1} \dots \sum_{y_k} \prod_{(i,j) \in J \cup B} \mathbf{1}_{y_i = y_j} \cdot \prod_{(i,j) \in K \cup C} \mathbf{1}_{x_i = y_j} \cdot \varphi_i^{(t-1)}(y_1, \dots, y_k)\right).
\tag{5} \\ \end{array}
$$

By fixing $A$, $B$ and $C$, it now suffices to argue that

$$
\prod_{(i,j) \in I \cup A} \mathbf{1}_{x_i = x_j} \cdot \left(\sum_{y_1} \dots \sum_{y_k} \prod_{(i,j) \in J \cup B} \mathbf{1}_{y_i = y_j} \cdot \prod_{(i,j) \in K \cup C} \mathbf{1}_{x_i = y_j} \cdot \varphi_i^{(t-1)}(y_1, \dots, y_k)\right), \tag{6}
$$

can be equivalently expressed in $\mathsf{TL}_k(\Omega)$.

Since our aim is to reduce the number of index variables from $2k$ to $k$, it is important to know which variables are the same. In expression (6), some equalities that hold between the variables may not be explicitly mentioned. For this reason, we expand $I \cup A$, $J \cup B$ and $K \cup C$ with their implied equalities. That is, $\mathbf{1}_{x_i = x_j}$ is added to $I \cup A$ if for every $(\boldsymbol{v}, \boldsymbol{w})$

$$
\llbracket \prod_{(i,j) \in I \cup A} \mathbf{1}_{x_i = x_j} \cdot \prod_{(i,j) \in J \cup B} \mathbf{1}_{y_i = y_j} \cdot \prod_{(i,j) \in K \cup C} \mathbf{1}_{x_i = y_j}, (\boldsymbol{v}, \boldsymbol{w}) \rrbracket_G = 1 \Rightarrow \llbracket \mathbf{1}_{x_i = x_j}, \boldsymbol{v} \rrbracket_G = 1
$$

holds. Similar implied equalities $\mathbf{1}_{y_i = y_j}$ and $\mathbf{1}_{x_i = y_j}$ are added to $J \cup B$ and $K \cup C$, respectively. Let us denote the resulting sets by $I'$, $J'$ and $K'$. It should be clear that we can add these implied equalities to expression (6) without changing its semantics.
In other words, expression (6) can be equivalently represented by

$$
\prod_{(i,j) \in I'} \mathbf{1}_{x_i = x_j} \cdot \left(\sum_{y_1} \dots \sum_{y_k} \prod_{(i,j) \in J'} \mathbf{1}_{y_i = y_j} \cdot \prod_{(i,j) \in K'} \mathbf{1}_{x_i = y_j} \cdot \varphi_i^{(t-1)}(y_1, \dots, y_k)\right). \tag{7}
$$

There are now two types of index variables among the $y_1, \ldots, y_k$: those that are equal to some $x_i$, and those that are not. Now suppose that $(j, j') \in J'$, and thus $y_j = y_{j'}$, and that also $(i, j) \in K'$, and thus $x_i = y_j$. Since we included the implied equalities, we also have $(i, j') \in K'$, and thus $x_i = y_{j'}$. There is no reason to keep $(j, j') \in J'$ as it is implied by $(i, j)$ and $(i, j') \in K'$. We can thus safely remove all pairs $(j, j')$ from $J'$ such that $(i, j) \in K'$ (and thus also $(i, j') \in K'$). We denote by $J''$ the reduced set of pairs of indices obtained from $J'$ in this way. We have that expression (7) can be equivalently written as

$$
\prod_{(i,j) \in I'} \mathbf{1}_{x_i = x_j} \cdot \left(\sum_{y_1} \dots \sum_{y_k} \prod_{(i,j) \in K'} \mathbf{1}_{x_i = y_j} \cdot \prod_{(i,j) \in J''} \mathbf{1}_{y_i = y_j} \cdot \varphi_i^{(t-1)}(y_1, \dots, y_k)\right), \tag{8}
$$

where we also switched the order of the equalities in $J''$ and $K'$. Our construction of $J''$ and $K'$ ensures that none of the variables $y_j$ with $j$ belonging to a pair in $J''$ is equal to some $x_i$.

By contrast, the variables $y_j$ occurring in pairs $(i, j) \in K'$ are equal to $x_i$. We observe, however, that also certain equalities among the variables $\{x_1, \ldots, x_k\}$ hold, as represented by the pairs in $I'$.
Let $I'(i) := \{i' \mid (i, i') \in I'\}$ and define $\hat{i}$ as a unique representative element of $I'(i)$. For example, one can take $\hat{i}$ to be the smallest index in $I'(i)$. We use this representative index (and corresponding $x$-variable) to simplify $K'$. More precisely, we replace each pair $(i, j) \in K'$ with the pair $(\hat{i}, j)$. In terms of variables, we replace $x_i = y_j$ with $x_{\hat{i}} = y_j$. Let $K''$ be the set $K'$ modified in this way. Expression (8) can thus be equivalently written as

$$
\prod_{(i,j) \in I'} \mathbf{1}_{x_i = x_j} \cdot \left(\sum_{y_1} \dots \sum_{y_k} \prod_{(\hat{i}, j) \in K''} \mathbf{1}_{x_{\hat{i}} = y_j} \cdot \prod_{(i,j) \in J''} \mathbf{1}_{y_i = y_j} \cdot \varphi_i^{(t-1)}(y_1, \dots, y_k)\right), \tag{9}
$$

where the free index variables of the subexpression

$$
\sum_{y_1} \dots \sum_{y_k} \prod_{(\hat{i}, j) \in K''} \mathbf{1}_{x_{\hat{i}} = y_j} \cdot \prod_{(i,j) \in J''} \mathbf{1}_{y_i = y_j} \cdot \varphi_i^{(t-1)}(y_1, \dots, y_k) \tag{10}
$$

are precisely the index variables $x_{\hat{i}}$ for $(\hat{i}, j) \in K''$. Recall that our aim is to reduce the number of variables from $2k$ to $k$. We are now finally ready to do this. More specifically, we consider a bijection $\beta : \{y_1, \ldots, y_k\} \to \{x_1, \ldots, x_k\}$ which ensures that for each $\hat{i}$ occurring in a pair in $K''$ there is a $j$ such that $(\hat{i}, j) \in K''$ and $\beta(y_j) = x_{\hat{i}}$. Furthermore, among the summations $\sum_{y_1} \cdots \sum_{y_k}$ we can ignore those $\sum_{y_j}$ for which $\beta(y_j) = x_{\hat{i}}$ holds for some such pair: such a $y_j$ only contributes for the given $x_{\hat{i}}$ value.
Let $Y$ be those indices $j$ in $[k]$ for which there is no pair $(\hat{i}, j) \in K''$ with $\beta(y_j) = x_{\hat{i}}$. Then, we can equivalently write expression (9) as

$$
\begin{array}{l} \prod_{(i,j) \in I'} \mathbf{1}_{x_i = x_j} \cdot \left(\sum_{\beta(y_j),\, j \in Y} \prod_{(\hat{i}, j) \in K''} \mathbf{1}_{x_{\hat{i}} = \beta(y_j)} \cdot \prod_{(i,j) \in J''} \mathbf{1}_{\beta(y_i) = \beta(y_j)} \right. \\ \left. \cdot\, \beta\left(\varphi_i^{(t-1)}(y_1, \dots, y_k)\right)\right), \tag{11} \\ \end{array}
$$

where $\beta(\varphi_i^{(t-1)}(y_1, \ldots, y_k))$ denotes the expression obtained by renaming the variables $y_1, \dots, y_k$ in $\varphi_i^{(t-1)}(y_1, \ldots, y_k)$ into $x$-variables according to $\beta$. This is our desired expression in $\mathsf{TL}_k(\Omega)$. If we analyze the summation depth of this expression, we have by induction that the summation depth of $\varphi_i^{(t-1)}$ is at most $(t-1)k$. In the above expression, we increase the summation depth by at most $|Y|$. The largest size of $Y$ is $k$, which occurs when none of the $y$-variables is equal to any of the $x$-variables. As a consequence, we obtain an expression of summation depth at most $tk$, as desired.

Finally, when using $k$-IGNs$^{(t)}$ for vertex embeddings via $(G, v) \to \mathbf{F}_{v, \dots, v, :}^{(t)}$, one simply pads the layer expression with $\prod_{i \in [k]} \mathbf{1}_{x_1 = x_i}$, which affects neither the number of variables nor the summation depth. When using $k$-IGNs$^{(t)}$ for graph embeddings, an additional invariant layer is added to obtain an embedding $G \rightarrow \mathbb{R}^{d_t}$. Such invariant layers have a similar (simpler) representation as given in equation 1 (Maron et al., 2019c), and allow for a similar analysis.
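The inclusion-exclusion rewriting in expression (4), on which the variable-reduction argument above rests, is easy to sanity-check numerically. The following minimal Python sketch (all names hypothetical, written purely for this illustration) verifies that a product of disequality indicators equals the signed sum, over subsets of the disequality pairs, of products of equality indicators:

```python
from itertools import chain, combinations, product

def powerset(pairs):
    """All subsets of a list of index pairs."""
    return chain.from_iterable(combinations(pairs, r) for r in range(len(pairs) + 1))

def diseq_product(v, pairs):
    """Product of disequality indicators: prod over (i, j) of 1_{v_i != v_j}."""
    res = 1
    for i, j in pairs:
        res *= int(v[i] != v[j])
    return res

def incl_excl(v, pairs):
    """Inclusion-exclusion expansion: sum over A of (-1)^|A| prod 1_{v_i = v_j}."""
    total = 0
    for A in powerset(pairs):
        term = (-1) ** len(A)
        for i, j in A:
            term *= int(v[i] == v[j])
        total += term
    return total

pairs = [(0, 1), (1, 2), (0, 2)]
# The two sides agree on every tuple v in [3]^3.
assert all(diseq_product(v, pairs) == incl_excl(v, pairs)
           for v in product(range(3), repeat=3))
```

This is exactly the identity $\mathbf{1}_{v_i \neq v_j} = 1 - \mathbf{1}_{v_i = v_j}$ expanded over all pairs, applied in (4) separately to the $x$-pairs, the $y$-pairs and the mixed pairs.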
One can verify that expressions in $\mathsf{TL}_k^{((t+1)k)}(\Omega)$ are needed when such an invariant layer is added to the previous $t$ layers. Based on this, Theorem 4.2, Lemma E.1 and Theorem 1 in Maron et al. (2019b) imply that $\rho_1(k\text{-}\mathsf{IGN}) = \rho_1(\mathsf{vwl}_{k-1}^{(\infty)})$ and $\rho_0(k\text{-}\mathsf{IGN}) = \rho_0(\mathsf{gwl}_{k-1}^{(\infty)})$ hold.

$k$-dimensional GINs. We can recover a layer-based characterization for $k$-IGNs that compute vertex embeddings by considering a special subset of $k$-IGNs. Indeed, the $k$-IGNs used in Maron et al. (2019b) to show $\rho_1(\mathsf{vwl}_{k-1}^{(t)}) \subseteq \rho_1(k\text{-}\mathsf{IGN}^{(t)})$ are of a very special form. We extract the essence of these special $k$-IGNs in the form of $k$-dimensional GINs. That is, we define the class of $k$-GINs to consist of layers defined as follows. The initial layers are just as for $k$-IGNs. Then, for $t \geq 1$:

$$
\mathbf{F}_{v_1, \dots, v_k, :}^{(t)} := \mathsf{mlp}_0^{(t)}\Big(\mathbf{F}_{v_1, \dots, v_k, :}^{(t-1)}, \sum_{u \in V_G} \mathsf{mlp}_1^{(t)}\big(\mathbf{F}_{u, v_2, \dots, v_k, :}^{(t-1)}\big), \sum_{u \in V_G} \mathsf{mlp}_1^{(t)}\big(\mathbf{F}_{v_1, u, \dots, v_k, :}^{(t-1)}\big), \dots, \sum_{u \in V_G} \mathsf{mlp}_1^{(t)}\big(\mathbf{F}_{v_1, v_2, \dots, v_{k-1}, u, :}^{(t-1)}\big)\Big),
$$

where $\mathbf{F}_{v_1, v_2, \ldots, v_k, :}^{(t-1)} \in \mathbb{R}^{d_{t-1}}$, and $\mathsf{mlp}_1^{(t)} : \mathbb{R}^{d_{t-1}} \to \mathbb{R}^{b_t}$ and $\mathsf{mlp}_0^{(t)} : \mathbb{R}^{d_{t-1} + k b_t} \to \mathbb{R}^{d_t}$ are MLPs. It is now an easy exercise to show that $k$-GINs can be represented in $\mathsf{TL}_k^{(t)}(\Omega)$ (remark that the summations used increase the summation depth by only one in each layer).
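To make the $k$-GIN layer above concrete, here is a minimal NumPy sketch (a toy instantiation written for this illustration: single ReLU maps stand in for $\mathsf{mlp}_0^{(t)}$ and $\mathsf{mlp}_1^{(t)}$, and all sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d, b = 4, 2, 3, 5  # vertices, tuple order, input width d_{t-1}, hidden width b_t

# Toy stand-ins for mlp_1: R^d -> R^b and mlp_0: R^{d + k*b} -> R^d.
W1 = rng.standard_normal((d, b))
W0 = rng.standard_normal((d + k * b, d))
mlp1 = lambda X: np.maximum(X @ W1, 0.0)
mlp0 = lambda X: np.maximum(X @ W0, 0.0)

def k_gin_layer(F):
    """One k-GIN layer; F holds all k-tuple embeddings, shape (n,)*k + (d,)."""
    T = mlp1(F)                              # apply mlp_1 to every tuple embedding
    parts = [F]
    for p in range(k):                       # p-th summand: sum out the p-th index
        S = np.expand_dims(T.sum(axis=p), axis=p)
        parts.append(np.broadcast_to(S, F.shape[:-1] + (b,)))
    return mlp0(np.concatenate(parts, axis=-1))

F0 = rng.standard_normal((n,) * k + (d,))
assert k_gin_layer(F0).shape == (n,) * k + (d,)
```

Since the layer is built only from elementwise maps and sums over index axes, permuting the vertices along all $k$ index axes of the input and applying the layer gives the same result as applying the layer first and permuting afterwards, i.e. the sketch is permutation equivariant as the definition requires.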
Combined with Theorem 4.2 and by inspecting the proof of Theorem 1 in Maron et al. (2019b), we obtain:

Proposition E.2. For any $k \geq 2$ and any $t \geq 0$: $\rho_1(k\text{-}\mathsf{GIN}^{(t)}) = \rho_1(\mathsf{vwl}_{k-1}^{(t)})$.

We can define the invariant version of $k$-GINs by adding a simple readout layer of the form

$$
\sum_{v_1, \ldots, v_k \in V_G} \mathsf{mlp}(\mathbf{F}_{v_1, \ldots, v_k, :}^{(t)}),
$$

as is used in Maron et al. (2019b). We obtain $\rho_0(k\text{-}\mathsf{GIN}) = \rho_0(\mathsf{gwl}_{k-1}^{(\infty)})$ by simply rephrasing the readout layer in $\mathsf{TL}_k(\Omega)$.

# F DETAILS OF SECTION 6

Let $\mathcal{C}(\mathcal{G}_s, \mathbb{R}^\ell)$ be the class of all continuous functions from $\mathcal{G}_s$ to $\mathbb{R}^{\ell}$. We always assume that $\mathcal{G}_s$ forms a compact space. For example, when vertices are labeled with values in $\{0,1\}^{\ell_0}$, $\mathcal{G}_s$ is a finite set which we equip with the discrete topology. When vertices carry labels in $\mathbb{R}^{\ell_0}$ we assume that these labels come from a compact set $K \subset \mathbb{R}^{\ell_0}$. In this case, one can represent graphs in $\mathcal{G}_s$ by elements in $(\mathbb{R}^{\ell_0})^2$ and the topology used is the one induced by some norm $\|\cdot\|$ on the reals. Similarly, we equip $\mathbb{R}^\ell$ with the topology induced by some norm $\|\cdot\|$.

Consider $\mathcal{F} \subseteq \mathcal{C}(\mathcal{G}_s, \mathbb{R}^\ell)$ and define $\overline{\mathcal{F}}$ as the closure of $\mathcal{F}$ in $\mathcal{C}(\mathcal{G}_s, \mathbb{R}^\ell)$ under the usual topology induced by $f \mapsto \sup_{G,\boldsymbol{v}} \|f(G, \boldsymbol{v})\|$. In other words, a continuous function $h : \mathcal{G}_s \to \mathbb{R}^\ell$ is in $\overline{\mathcal{F}}$ if there exists a sequence of functions $f_1, f_2, \ldots \in \mathcal{F}$ such that $\lim_{i \to \infty} \sup_{G,\boldsymbol{v}} \|f_i(G, \boldsymbol{v}) - h(G, \boldsymbol{v})\| = 0$.
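The separation-power conditions of the form $\rho(\cdot) \subseteq \rho(\cdot)$ used throughout this section can be made concrete on a finite toy domain (standing in for $\mathcal{G}_s$; everything below is a hypothetical illustration, not part of the formal development): $\rho$ collects the pairs of inputs that a function class fails to distinguish, and $\rho(\mathcal{F}) \subseteq \rho(f)$ says that $f$ is not more separating than $\mathcal{F}$.

```python
from itertools import product

def rho(fs, domain):
    """Pairs of inputs that no function in the class fs separates."""
    return {(a, b) for a, b in product(domain, repeat=2)
            if all(f(a) == f(b) for f in fs)}

domain = range(4)
F = [lambda x: x % 2]              # a toy class: only sees parity
f = lambda x: float(x % 2 == 0)    # induces the same partition as F
g = lambda x: float(x)             # separates everything

assert rho(F, domain) <= rho([f], domain)      # f is not more separating than F
assert not rho(F, domain) <= rho([g], domain)  # g is strictly more separating
```

Here `f` induces the same partition of the domain as `F` while `g` refines it, and the subset test on $\rho$ detects exactly this difference.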
The following theorem provides a characterization of the closure of a set of functions. We state it here adapted to our setting.

Theorem F.1 ((Timofte, 2005)). Let $\mathcal{F} \subseteq \mathcal{C}(\mathcal{G}_s, \mathbb{R}^\ell)$ be such that there exists a set $\mathcal{S} \subseteq \mathcal{C}(\mathcal{G}_s, \mathbb{R})$ satisfying $\mathcal{S} \cdot \mathcal{F} \subseteq \mathcal{F}$ and $\rho(\mathcal{S}) \subseteq \rho(\mathcal{F})$. Then,

$$
\overline{\mathcal{F}} := \left\{f \in \mathcal{C}(\mathcal{G}_s, \mathbb{R}^{\ell}) \mid \rho(\mathcal{F}) \subseteq \rho(f),\ \forall (G, \boldsymbol{v}) \in \mathcal{G}_s,\ f(G, \boldsymbol{v}) \in \overline{\mathcal{F}(G, \boldsymbol{v})} \right\},
$$

where $\mathcal{F}(G, \boldsymbol{v}) := \{h(G, \boldsymbol{v}) \mid h \in \mathcal{F}\} \subseteq \mathbb{R}^{\ell}$. We can equivalently replace $\rho(\mathcal{F})$ by $\rho(\mathcal{S})$ in the expression for $\overline{\mathcal{F}}$.

We will use this theorem to show Theorem 6.1 in the setting where $\mathcal{F}$ consists of functions that can be represented in $\mathrm{TL}(\Omega)$, and more generally, for sets of functions that satisfy the two conditions stated below. We more generally allow $\mathcal{F}$ to consist of functions $f : \mathcal{G}_s \to \mathbb{R}^{\ell_f}$, where $\ell_f \in \mathbb{N}$ may depend on $f$. We require $\mathcal{F}$ to satisfy the following two conditions:

concatenation-closed: If $f_1 : \mathcal{G}_s \to \mathbb{R}^p$ and $f_2 : \mathcal{G}_s \to \mathbb{R}^q$ are in $\mathcal{F}$, then $g := (f_1, f_2) : \mathcal{G}_s \to \mathbb{R}^{p+q} : (G, \boldsymbol{v}) \mapsto (f_1(G, \boldsymbol{v}), f_2(G, \boldsymbol{v}))$ is also in $\mathcal{F}$.

function-closed: For a fixed $\ell \in \mathbb{N}$, for any $f \in \mathcal{F}$ such that $f : \mathcal{G}_s \to \mathbb{R}^p$, also $h \circ f : \mathcal{G}_s \to \mathbb{R}^\ell$ is in $\mathcal{F}$ for any continuous function $h \in \mathcal{C}(\mathbb{R}^p, \mathbb{R}^\ell)$.
We denote by $\mathcal{F}_{\ell}$ the subset of $\mathcal{F}$ of functions from $\mathcal{G}_s$ to $\mathbb{R}^{\ell}$.

Theorem 6.1. For any $\ell$, and any set $\mathcal{F}$ of functions that is concatenation-closed and function-closed for $\ell$, we have: $\overline{\mathcal{F}_{\ell}} = \{f : \mathcal{G}_s \to \mathbb{R}^{\ell} \mid \rho_s(\mathcal{F}) \subseteq \rho_s(f)\}$.

Proof. The proof consists of (i) verifying the existence of a set $\mathcal{S}$ as mentioned in Theorem F.1; and of (ii) eliminating the pointwise convergence condition "$\forall (G, \boldsymbol{v}) \in \mathcal{G}_s$, $f(G, \boldsymbol{v}) \in \overline{\mathcal{F}_{\ell}(G, \boldsymbol{v})}$" in the closure characterization in Theorem F.1.

For showing (ii) we argue that $\overline{\mathcal{F}_{\ell}(G, \boldsymbol{v})} = \mathbb{R}^{\ell}$, so that the condition $f(G, \boldsymbol{v}) \in \overline{\mathcal{F}_{\ell}(G, \boldsymbol{v})}$ is automatically satisfied for any $f \in \mathcal{C}(\mathcal{G}_s, \mathbb{R}^\ell)$. Indeed, take an arbitrary $f : \mathcal{G}_s \to \mathbb{R}^\ell$ and consider the constant functions $g_i : \mathbb{R}^\ell \to \mathbb{R}^\ell : \boldsymbol{x} \mapsto \boldsymbol{b}_i$ with $\boldsymbol{b}_i \in \mathbb{R}^\ell$ the $i$th basis vector. Since $\mathcal{F}$ is function-closed for $\ell$, so is $\mathcal{F}_{\ell}$. Hence, $b_i := g_i \circ f \in \mathcal{F}_{\ell}$ as well. Furthermore, if $s_a : \mathbb{R}^\ell \to \mathbb{R}^\ell : \boldsymbol{x} \mapsto a \cdot \boldsymbol{x}$, for $a \in \mathbb{R}$, then $s_a \circ f \in \mathcal{F}_{\ell}$ and thus $\mathcal{F}_{\ell}$ is closed under scalar multiplication. Finally, consider $+ : \mathbb{R}^{2\ell} \to \mathbb{R}^\ell : (\boldsymbol{x}, \boldsymbol{y}) \mapsto \boldsymbol{x} + \boldsymbol{y}$. For $f$ and $g$ in $\mathcal{F}_{\ell}$, $h = (f, g) \in \mathcal{F}$ since $\mathcal{F}$ is concatenation-closed. As a consequence, the function $+ \circ h : \mathcal{G}_s \to \mathbb{R}^\ell$ is in $\mathcal{F}_{\ell}$, showing that $\mathcal{F}_{\ell}$ is also closed under addition.
All combined, this shows that $\mathcal{F}_{\ell}$ is closed under taking linear combinations, and since the basis vectors of $\mathbb{R}^\ell$ can be attained, $\overline{\mathcal{F}_{\ell}(G, \boldsymbol{v})} = \mathbb{R}^\ell$, as desired.

For (i), we show the existence of a set $\mathcal{S} \subseteq \mathcal{C}(\mathcal{G}_s, \mathbb{R})$ such that $\mathcal{S} \cdot \mathcal{F}_{\ell} \subseteq \mathcal{F}_{\ell}$ and $\rho_s(\mathcal{S}) \subseteq \rho_s(\mathcal{F}_{\ell})$ hold. Similarly as in Azizian & Lelarge (2021), we define

$$
\mathcal{S} := \big\{f \in \mathcal{C}(\mathcal{G}_s, \mathbb{R}) \,\big|\, \underbrace{(f, f, \ldots, f)}_{\ell \text{ times}} \in \mathcal{F}_{\ell}\big\}.
$$

We remark that for $s \in \mathcal{S}$ and $f \in \mathcal{F}_{\ell}$, $s \cdot f : \mathcal{G}_s \to \mathbb{R}^\ell : (G, \boldsymbol{v}) \mapsto s(G, \boldsymbol{v}) \odot f(G, \boldsymbol{v})$, with $\odot$ being pointwise multiplication, is also in $\mathcal{F}_{\ell}$. Indeed, $s \cdot f = \odot \circ (s, f)$ with $(s, f)$ the concatenation of $s$ and $f$ and $\odot : \mathbb{R}^{2\ell} \to \mathbb{R}^\ell : (\boldsymbol{x}, \boldsymbol{y}) \mapsto \boldsymbol{x} \odot \boldsymbol{y}$ being pointwise multiplication.

It remains to verify $\rho_s(\mathcal{S}) \subseteq \rho_s(\mathcal{F}_\ell)$. Assume that $(G, \boldsymbol{v})$ and $(H, \boldsymbol{w})$ are not in $\rho_s(\mathcal{F}_\ell)$. By definition, this implies the existence of a function $\hat{f} \in \mathcal{F}_\ell$ such that $\hat{f}(G, \boldsymbol{v}) = \boldsymbol{a} \neq \boldsymbol{b} = \hat{f}(H, \boldsymbol{w})$ with $\boldsymbol{a}, \boldsymbol{b} \in \mathbb{R}^\ell$. We argue that $(G, \boldsymbol{v})$ and $(H, \boldsymbol{w})$ are not in $\rho_s(\mathcal{S})$ either. Indeed, Proposition 1 in Maron et al.
(2019b) implies that there exist natural numbers $\boldsymbol{\alpha} = (\alpha_1, \dots, \alpha_\ell) \in \mathbb{N}^\ell$ such that the mapping $h_\alpha : \mathbb{R}^\ell \to \mathbb{R} : \boldsymbol{x} \mapsto \prod_{i=1}^\ell x_i^{\alpha_i}$ satisfies $h_\alpha(\boldsymbol{a}) = a \neq b = h_\alpha(\boldsymbol{b})$, with $a, b \in \mathbb{R}$. Since $\mathcal{F}$ (and thus also $\mathcal{F}_\ell$) is function-closed, $h_\alpha \circ f \in \mathcal{F}_\ell$ for any $f \in \mathcal{F}_\ell$. In particular, $g := h_\alpha \circ \hat{f} \in \mathcal{F}_\ell$, and concatenation-closure implies that $(g, \dots, g) : \mathcal{G}_s \to \mathbb{R}^\ell$ is in $\mathcal{F}_\ell$ too. Hence, $g \in \mathcal{S}$, by definition. It now suffices to observe that $g(G, \boldsymbol{v}) = h_\alpha(\hat{f}(G, \boldsymbol{v})) = a \neq b = h_\alpha(\hat{f}(H, \boldsymbol{w})) = g(H, \boldsymbol{w})$, and thus $(G, \boldsymbol{v})$ and $(H, \boldsymbol{w})$ are not in $\rho_s(\mathcal{S})$, as desired.

When we know more about $\rho_s(\mathcal{F}_\ell)$ we can say a bit more. In the following, we let $\mathrm{alg} \in \{\mathsf{cr}^{(t)}, \mathsf{gcr}^{(t)}, \mathsf{vwl}_k^{(t)}, \mathsf{gwl}_k^{(\infty)}\}$ and only consider the setting where $s$ is either $0$ (invariant graph functions) or $s = 1$ (equivariant graph/vertex functions).

Corollary 6.2. Under the assumptions of Theorem 6.1 and if $\rho(\mathcal{F}_{\ell}) = \rho(\mathrm{alg})$, then $\overline{\mathcal{F}_{\ell}} = \{f : \mathcal{G}_s \to \mathbb{R}^{\ell} \mid \rho(\mathrm{alg}) \subseteq \rho(f)\}$.

Proof. This is just a restatement of Theorem 6.1 in which $\rho_s(\mathcal{F}_\ell)$ in the condition $\rho_s(\mathcal{F}_\ell) \subseteq \rho_s(f)$ is replaced by $\rho_s(\mathrm{alg})$, where $s = 1$ for $\mathrm{alg} \in \{\mathsf{cr}^{(t)}, \mathsf{vwl}_k^{(t)}\}$ and $s = 0$ for $\mathrm{alg} \in \{\mathsf{gcr}^{(t)}, \mathsf{gwl}_k^{(\infty)}\}$.

To relate all this to functions representable by tensor languages, we make the following observations.
First, if we consider $\mathcal{F}$ to be the set of all functions that can be represented in $\mathsf{GTL}^{(t)}(\Omega)$, $\mathsf{TL}_2^{(t+1)}(\Omega)$, $\mathsf{TL}_{k+1}^{(t)}(\Omega)$ or $\mathsf{TL}_{k+1}(\Omega)$, then $\mathcal{F}$ will automatically be concatenation-closed and function-closed, provided that $\Omega$ consists of all functions in $\bigcup_p \mathcal{C}(\mathbb{R}^p, \mathbb{R}^\ell)$. Hence, Theorem 6.1 applies. Furthermore, our results from Section 4 tell us that for all $t \geq 0$ and $k \geq 1$, $\rho_1(\mathsf{cr}^{(t)}) = \rho_1(\mathsf{GTL}^{(t)}(\Omega))$, $\rho_0(\mathsf{gcr}^{(t)}) = \rho_0(\mathsf{TL}_2^{(t+1)}(\Omega)) = \rho_0(\mathsf{gwl}_1^{(t)})$, $\rho_1(\mathsf{vwl}_k^{(t)}) = \rho_1(\mathsf{TL}_{k+1}^{(t)}(\Omega))$, and $\rho_0(\mathsf{TL}_{k+1}(\Omega)) = \rho_0(\mathsf{gwl}_k^{(\infty)})$. As a consequence, Corollary 6.2 applies as well. We thus easily obtain the following characterizations:

Proposition F.2. For any $t \geq 0$ and $k \geq 1$:

- If $\mathcal{F}$ consists of all functions representable in $\mathsf{GTL}^{(t)}(\Omega)$, then $\overline{\mathcal{F}_{\ell}} = \{f : \mathcal{G}_1 \to \mathbb{R}^\ell \mid \rho_1(\mathsf{cr}^{(t)}) \subseteq \rho_1(f)\}$;
- If $\mathcal{F}$ consists of all functions representable in $\mathsf{TL}_{k+1}^{(t)}(\Omega)$, then $\overline{\mathcal{F}_\ell} = \{f : \mathcal{G}_1 \to \mathbb{R}^\ell \mid \rho_1(\mathsf{vwl}_k^{(t)}) \subseteq \rho_1(f)\}$;
- If $\mathcal{F}$ consists of all functions representable in $\mathsf{TL}_2^{(t+1)}(\Omega)$, then $\overline{\mathcal{F}_{\ell}} = \{f : \mathcal{G}_0 \to \mathbb{R}^\ell \mid \rho_0(\mathsf{gwl}_1^{(t)}) \subseteq \rho_0(f)\}$; and finally,
- If $\mathcal{F}$ consists of all functions representable in $\mathsf{TL}_{k+1}(\Omega)$, then $\overline{\mathcal{F}_\ell} = \{f : \mathcal{G}_0 \to \mathbb{R}^\ell \mid \rho_0(\mathsf{gwl}_k^{(\infty)}) \subseteq \rho_0(f)\}$,

provided that
$\Omega$ consists of all functions in $\bigcup_p \mathcal{C}(\mathbb{R}^p, \mathbb{R}^\ell)$.

In fact, Lemma 32 in Azizian & Lelarge (2021) implies that we can equivalently populate $\Omega$ with all MLPs instead of all continuous functions. We can thus use MLPs and continuous functions interchangeably when considering the closure of functions.

At this point, we want to make a comparison with the results and techniques in Azizian & Lelarge (2021). Our proof strategy is very similar and is also based on Theorem F.1. The key distinguishing feature is that we consider functions $f : \mathcal{G}_s \to \mathbb{R}^{\ell_f}$ instead of functions from graphs alone. This has the great advantage that no separate proofs are needed to deal with invariant or equivariant functions. Equivariance incurs quite some complexity in the setting considered in Azizian & Lelarge (2021). A second major difference is that, by considering functions representable in tensor languages, and based on our results from Section 4, we obtain a more fine-grained characterization. Indeed, we obtain characterizations in terms of the number of rounds used in CR and $k$-WL. In Azizian & Lelarge (2021), $t$ is always set to $\infty$, that is, an unbounded number of rounds is considered. Furthermore, when it concerns functions $f : \mathcal{G}_1 \to \mathbb{R}^{\ell_f}$, we recall that CR is different from 1-WL. Only 1-WL is considered in Azizian & Lelarge (2021). Finally, another difference is that we define the equivariant version $\mathsf{vwl}_k^{(t)}$ in a different way than is done in Azizian & Lelarge (2021), because in this way a tighter connection to logics and tensor languages can be made. In fact, if we were to use the equivariant version of $k$-WL from Azizian & Lelarge (2021), then we necessarily have to consider an unbounded number of rounds (similarly as in our $\mathsf{gwl}_k^{(\infty)}$ case).
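Since the bounded round count $t$ is what the fine-grained characterizations track, it may help to see it operationally. The sketch below runs $t$ rounds of colour refinement on an adjacency-list graph (one common formulation, refining by the multiset of neighbour colours; the precise variants cr and 1-WL used in the paper differ in details, so this is an illustration only):

```python
def colour_refinement(adj, t):
    """Run t rounds of colour refinement; adj maps vertex -> list of neighbours."""
    colours = {v: 0 for v in adj}  # uniform initial colouring (unlabelled graph)
    for _ in range(t):
        # Signature: own colour plus the multiset of neighbour colours.
        sig = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
               for v in adj}
        # Relabel signatures with small integers, consistently across vertices.
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colours = {v: palette[sig[v]] for v in adj}
    return colours

# On a path with 4 vertices, one round already separates the endpoints
# (degree 1) from the inner vertices (degree 2); further rounds are stable.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert colour_refinement(path, 2) == colour_refinement(path, 3)
```

Two vertices receiving different colours after $t$ rounds is exactly what a separation statement such as "separated by $\mathsf{cr}^{(t)}$" refers to, and the results above bound which such distinctions a GNN of depth $t$ can realize.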
We conclude this section by providing a few more details about the consequences of the above results for GNNs. As we already mentioned in Section 6.2, many common GNN architectures are concatenation-closed and function-closed (using MLPs instead of continuous functions). This holds, for example, for the classes $\mathsf{GIN}_{\ell}^{(t)}$, $\mathsf{eGIN}_{\ell}^{(t)}$, $k\text{-FGNN}_{\ell}^{(t)}$, $k\text{-GIN}_{\ell}^{(t)}$ and $k\text{-IGN}_{\ell}^{(t)}$, as described in Section 5 and further detailed in Sections E and D. Here, the subscript $\ell$ refers to the dimension of the embedding space.

We now consider a function $f$ that is not more separating than $\mathsf{cr}^{(t)}$ (respectively, $\mathsf{gcr}^{(t)}$, $\mathsf{vwl}_k^{(t)}$ or $\mathsf{gwl}_k^{(\infty)}$, for some $k \geq 1$), and want to know whether $f$ can be approximated by a class of GNNs. Proposition F.2 tells us that such an $f$ can be approximated by a class of GNNs as long as these are at least as separating as $\mathsf{GTL}^{(t)}$ (respectively, $\mathsf{TL}_2^{(t+1)}$, $\mathsf{TL}_{k+1}^{(t)}$ or $\mathsf{TL}_{k+1}^{(\infty)}$). This, in turn, amounts to showing that the GNNs can be represented in the corresponding tensor language fragment, and that they can match the corresponding labeling algorithm in separation power. We illustrate this for the GNN architectures mentioned above.

- In Section 5 we showed that $\mathsf{GIN}_{\ell}^{(t)}$ can be represented in $\mathsf{GTL}^{(t)}(\Omega)$. Theorem 4.3 then implies that $\rho_1(\mathsf{cr}^{(t)}) \subseteq \rho_1(\mathsf{GIN}_{\ell}^{(t)})$. Furthermore, Xu et al. (2019) showed that $\rho_1(\mathsf{GIN}_{\ell}^{(t)}) \subseteq \rho_1(\mathsf{cr}^{(t)})$. As a consequence, $\rho_1(\mathsf{GIN}_{\ell}^{(t)}) = \rho_1(\mathsf{cr}^{(t)})$. We note that the lower bound for GINs only holds when graphs carry discrete labels. The same restriction is imposed in Azizian & Lelarge (2021).
- In Section 5 we showed that $\mathsf{eGIN}_{\ell}^{(t)}$ can be represented in $\mathsf{TL}_2^{(t)}(\Omega)$. Theorem 4.2 then implies that $\rho_1(\mathsf{vwl}_1^{(t)}) \subseteq \rho_1(\mathsf{eGIN}_{\ell}^{(t)})$. Furthermore, Barceló et al. (2020) showed that $\rho_1(\mathsf{eGIN}_{\ell}^{(t)}) \subseteq \rho_1(\mathsf{vwl}_1^{(t)})$. As a consequence, $\rho_1(\mathsf{eGIN}_{\ell}^{(t)}) = \rho_1(\mathsf{vwl}_1^{(t)})$. Again, the lower bound is only valid when graphs carry discrete labels.
- In Section 5 we mentioned (see details in Section D) that $k\text{-FGNN}_{\ell}^{(t)}$ can be represented in $\mathsf{TL}_{k+1}^{(t)}(\Omega)$. Theorem 4.2 then implies that $\rho_1(\mathsf{vwl}_k^{(t)}) \subseteq \rho_1(k\text{-FGNN}_{\ell}^{(t)})$. Furthermore, Maron et al. (2019b) showed that $\rho_1(k\text{-FGNN}_{\ell}^{(t)}) \subseteq \rho_1(\mathsf{vwl}_k^{(t)})$. As a consequence, $\rho_1(k\text{-FGNN}_{\ell}^{(t)}) = \rho_1(\mathsf{vwl}_k^{(t)})$. Similarly, $\rho_1((k+1)\text{-GIN}_{\ell}^{(t)}) = \rho_1(\mathsf{vwl}_k^{(t)})$ for the special class of $(k+1)$-IGNs described in Section E. No restrictions are in place for the lower bounds and hence real-valued vertex-labelled graphs can be considered.
- When $\mathsf{GIN}_{\ell}^{(t)}$ or $\mathsf{eGIN}_{\ell}^{(t)}$ are extended with a readout layer, we showed in Section 5 that these can be represented in $\mathsf{TL}_2^{(t+1)}(\Omega)$. Theorem 4.4 and the results by Xu et al. (2019) and Barceló et al. (2020) then imply that $\rho_0(\mathsf{gwl}_1^{(t)})$ and $\rho_0(\mathsf{gcr}^{(t)})$ coincide with the separation power of these architectures with a readout layer. Here again, discrete labels need to be considered.
- Similarly, when $k$-FGNNs or $(k+1)$-IGNs are used for graph embeddings, we can represent these in $\mathsf{TL}_{k+1}(\Omega)$, with the result that their separation power coincides with that of $\mathsf{gwl}_k^{(\infty)}$. Again, no restrictions are in place on the vertex labels.

So for all these architectures, Corollary 6.2 applies and we can characterize the closures of these architectures in terms of functions that are not more separating than their corresponding versions of cr or $k$-WL, as described in the main paper. In summary,

Proposition F.3. For any $t \geq 0$:

$$
\overline{\mathsf{GIN}_{\ell}^{(t)}} = \left\{f : \mathcal{G}_1 \rightarrow \mathbb{R}^{\ell} \mid \rho_1(\mathsf{cr}^{(t)}) \subseteq \rho_1(f)\right\} = \overline{\mathsf{GTL}^{(t)}(\Omega)_{\ell}}
$$

$$
\overline{\mathsf{eGIN}_{\ell}^{(t)}} = \left\{f : \mathcal{G}_1 \rightarrow \mathbb{R}^{\ell} \mid \rho_1(\mathsf{vwl}_1^{(t)}) \subseteq \rho_1(f)\right\} = \overline{\mathsf{TL}_2^{(t)}(\Omega)_{\ell}}
$$

and when extended with a readout layer:

$$
\overline{\mathsf{GIN}_{\ell}^{(t)}} = \overline{\mathsf{eGIN}_{\ell}^{(t)}} = \{f : \mathcal{G}_0 \to \mathbb{R}^{\ell} \mid \rho_0(\mathsf{gwl}_1^{(t)}) \subseteq \rho_0(f)\} = \overline{\mathsf{TL}_2^{(t+1)}(\Omega)_{\ell}}.
$$

Furthermore, for any $k \geq 1$:

$$
\overline{k\text{-FGNN}_{\ell}^{(t)}} = \overline{k\text{-GIN}_{\ell}^{(t)}} = \left\{f : \mathcal{G}_1 \rightarrow \mathbb{R}^{\ell} \mid \rho_1(\mathsf{vwl}_k^{(t)}) \subseteq \rho_1(f)\right\} = \overline{\mathsf{TL}_{k+1}^{(t)}(\Omega)_{\ell}}
$$

$$
\overline{(k+1)\text{-IGN}_{\ell}} = \left\{f : \mathcal{G}_1 \rightarrow \mathbb{R}^{\ell} \mid \rho_1(\mathsf{vwl}_k^{(\infty)}) \subseteq \rho_1(f)\right\} = \overline{\mathsf{TL}_{k+1}(\Omega)_{\ell}}
$$

and when converted into graph embeddings:

$$
\overline{k\text{-FGNN}_{\ell}} = \overline{k\text{-GIN}_{\ell}} = \overline{(k+1)\text{-IGN}_{\ell}} = \{f : \mathcal{G}_0 \to \mathbb{R}^{\ell} \mid \rho_0(\mathsf{gwl}_k^{(\infty)}) \subseteq \rho_0(f)\} = \overline{\mathsf{TL}_{k+1}(\Omega)_{\ell}},
$$

where the closures of the tensor languages are interpreted as the closures of the graph or graph/vertex functions that they can represent. For results involving GINs or eGINs, the graphs considered should have discretely labeled vertices.

As a side note, we remark that in order to simulate CR on graphs with real-valued labels, one can use a GNN architecture of the form $\mathbf{F}_{v,:}^{(t)} = \big(\mathbf{F}_{v,:}^{(t-1)}, \sum_{u \in N_G(v)} \mathsf{mlp}(\mathbf{F}_{u,:}^{(t-1)})\big)$, which translates into $\mathsf{GTL}^{(t)}(\Omega)$ as expressions of the form

$$
\varphi_j^{(t)}(x_1) := \left\{ \begin{array}{ll} \varphi_j^{(t-1)}(x_1) & 1 \leq j \leq d_{t-1} \\ \sum_{x_2} E(x_1, x_2) \cdot \mathsf{mlp}_j\big(\varphi_1^{(t-1)}(x_2), \ldots, \varphi_{d_{t-1}}^{(t-1)}(x_2)\big) & d_{t-1} < j \leq d_t. \end{array} \right.
+$$ + +The upper bound in terms of CR follows from our main results. To show that CR can be simulated, it suffices to observe that one can approximate the function used in Proposition 1 in Maron et al. (2019b) to injectively encode multisets of real vectors by means of MLPs. As such, a continuous version of the first bullet in the previous proposition can be obtained. + +# G DETAILS ON TREEWIDTH AND PROPOSITION 4.5 + +As an extension of our main results in Section 4, we enrich the class of tensor language expressions for which connections to $k$-WL exist. More precisely, instead of requiring expressions to belong to $\mathsf{TL}_{k + 1}(\Omega)$, that is, to only use $k + 1$ index variables, we investigate when expressions in $\mathsf{TL}(\Omega)$ are semantically equivalent to an expression using $k + 1$ variables. Proposition 4.5 identifies a large class of such expressions: those of treewidth $k$. As a consequence, even when representing a GNN architecture seems to require more than $k + 1$ index variables, this number can sometimes be reduced, which in turn implies that the architecture's separation power is upper bounded by $\ell$-WL for some smaller $\ell < k$. Stated otherwise, to boost the separation power of GNNs, the expressions representing the layers of the GNNs must have large treewidth. + +We next introduce some concepts related to treewidth. We closely follow the exposition given in Abo Khamis et al. (2016), which introduces treewidth by means of variable elimination sequences of hypergraphs. + +In this section, we restrict ourselves to summation aggregation. + +# G.1 ELIMINATION SEQUENCES + +We first define elimination sequences for hypergraphs. Later on, we show how to associate such hypergraphs to expressions in tensor languages, allowing us to define elimination sequences for tensor language expressions.
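To make the variable-reuse phenomenon concrete before the formal definitions, here is a small numpy sketch (ours, not from the paper; the graph is an arbitrary example): the 2-path expression $\sum_{x_1,x_2,x_3} E(x_1,x_2)\cdot E(x_2,x_3)$ is written with three index variables, but its hypergraph is a path of treewidth 1, so it can be evaluated by operations that touch at most two indices at a time; the triangle expression has treewidth 2 and genuinely needs all three.

```python
import numpy as np

# Adjacency matrix of a small example graph (hypothetical; any graph works).
E = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# 2-path expression sum_{x1,x2,x3} E(x1,x2)*E(x2,x3): three index variables,
# but treewidth 1, so two indices at a time suffice.
paths_naive = np.einsum('ab,bc->', E, E)   # all three indices together
deg = E.sum(axis=1)                        # eliminate x3: R(x2) = sum_x3 E(x2,x3)
paths_two = float(deg @ deg)               # then sum_{x1,x2} E(x1,x2)*R(x2)

# Triangle expression sum_{x1,x2,x3} E(x1,x2)*E(x2,x3)*E(x3,x1): treewidth 2,
# no elimination order avoids touching all three indices at once.
triangles = np.einsum('ab,bc,ca->', E, E, E)   # = 6 * (number of triangles)
```

Both evaluations of the path expression agree, illustrating why the number of syntactic index variables can overestimate the number actually needed.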
+ +By a multi-hypergraph $\mathcal{H} = (\mathcal{V},\mathcal{E})$ we simply mean a multiset $\mathcal{E}$ of subsets of a set of vertices $\mathcal{V}$. An elimination sequence for a hypergraph $\mathcal{H}$ is a vertex ordering $\sigma = v_{1},\ldots ,v_{n}$ of the vertices of $\mathcal{H}$. With such a sequence $\sigma$, we associate, for $j = n, n - 1, \ldots, 1$, a sequence of $n$ multi-hypergraphs $\mathcal{H}_n^\sigma ,\mathcal{H}_{n - 1}^\sigma ,\ldots ,\mathcal{H}_1^\sigma$ as follows. We define + +$$ +\mathcal{H}_{n} := (\mathcal{V}_{n}, \mathcal{E}_{n}) := \mathcal{H} +$$ + +$$ +\partial(v_{n}) := \{F \in \mathcal{E}_{n} \mid v_{n} \in F\} +$$ + +$$ +U_{n} := \bigcup_{F \in \partial(v_{n})} F, +$$ + +and for $j = n - 1, n - 2, \ldots, 1$: + +$$ +\mathcal{V}_{j} := \{v_{1}, \dots, v_{j}\} +$$ + +$$ +\mathcal{E}_{j} := \left(\mathcal{E}_{j + 1} \setminus \partial(v_{j + 1})\right) \cup \left\{U_{j + 1} \setminus \{v_{j + 1}\}\right\} +$$ + +$$ +\partial(v_{j}) := \{F \in \mathcal{E}_{j} \mid v_{j} \in F\} +$$ + +$$ +U_{j} := \bigcup_{F \in \partial(v_{j})} F. +$$ + +The induced width of $\sigma$ on $\mathcal{H}$ is defined as $\max_{i\in [n]}|U_i| - 1$. We further consider the setting in which $\mathcal{H}$ has some distinguished vertices. As we will see shortly, these distinguished vertices correspond to the free index variables of tensor language expressions. Without loss of generality, we assume that the distinguished vertices are $v_{1}, v_{2}, \ldots, v_{f}$. When such distinguished vertices are present, an elimination sequence is defined just as before, except that the distinguished vertices come first in the sequence. If $v_{1}, \ldots, v_{f}$ are the distinguished vertices, then we define the induced width of the sequence as $f + \max_{f+1 \leq i \leq n} |U_i \setminus \{v_1, \ldots, v_f\}| - 1$.
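The recursion above is straightforward to implement. The following Python sketch (ours, for illustration; the case without distinguished vertices) computes the induced width of a given vertex ordering on a multi-hypergraph:

```python
def induced_width(hyperedges, order):
    """Induced width of the vertex ordering `order` on a multi-hypergraph,
    following the recursion in the text: for j = n, ..., 1, collect the
    edges containing v_j, take their union U_j, and replace them by the
    single edge U_j \\ {v_j}; the width is max_j |U_j| - 1."""
    edges = [set(F) for F in hyperedges]
    width = 0
    for v in reversed(order):                 # eliminate v_n first, then v_{n-1}, ...
        boundary = [F for F in edges if v in F]
        U = set().union(*boundary) if boundary else set()
        width = max(width, len(U) - 1)
        edges = [F for F in edges if v not in F] + [U - {v}]
    return width
```

For a star with edges $\{0,1\},\{0,2\},\{0,3\}$, eliminating the leaves first gives induced width 1, whereas eliminating the centre first gives width 3; the treewidth is the minimum over all orderings.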
In other words, we count the number of distinguished vertices, and then augment it with the induced width of the sequence starting from $v_{f+1}$ to $v_{n}$, hereby ignoring the distinguished vertices in the $U_i$'s. One could, more generally, also try to reduce the number of free index variables, but we assume that this number is fixed, similar to how GNNs operate. + +# G.2 CONJUNCTIVE TL EXPRESSIONS AND TREEWIDTH + +We start by considering a special form of TL expressions, which we refer to as conjunctive TL expressions, in analogy to conjunctive queries in database research and logic. A conjunctive TL expression is of the form + +$$ +\varphi(\boldsymbol{x}) = \sum_{\boldsymbol{y}} \psi(\boldsymbol{x}, \boldsymbol{y}), +$$ + +where $\pmb{x}$ denotes the free index variables, $\pmb{y}$ contains all index variables under the scope of a summation, and, finally, $\psi(\pmb{x},\pmb{y})$ is a product of base predicates in TL. That is, $\psi(\pmb{x},\pmb{y})$ is a product of predicates $E(z_{i},z_{j})$ and $P_{\ell}(z_{i})$ with $z_{i},z_{j}$ variables in $\pmb{x}$ or $\pmb{y}$. With such a conjunctive TL expression, one can associate a multi-hypergraph in a canonical way (Abo Khamis et al., 2016). More precisely, given a conjunctive TL expression $\varphi(\pmb{x})$ we define $\mathcal{H}_{\varphi}$ as follows: + +- $\mathcal{V}_{\varphi}$ consists of all index variables in $\pmb{x}$ and $\pmb{y}$; +- $\mathcal{E}_{\varphi}$: for each atomic base predicate $\tau$ in $\psi$ we have an edge $F_{\tau}$ containing the indices occurring in the predicate; and +- the vertices corresponding to the free index variables $\pmb{x}$ form the distinguished set of vertices. + +We now define an elimination sequence for $\varphi$ as an elimination sequence for $\mathcal{H}_{\varphi}$, taking the distinguished vertices into account. The following observation ties elimination sequences of $\varphi$ to the number of variables needed to express $\varphi$. + +Proposition G.1.
Let $\varphi(\pmb{x})$ be a conjunctive TL expression for which an elimination sequence of induced width $k - 1$ exists. Then $\varphi(\pmb{x})$ is equivalent to an expression $\tilde{\varphi}(\pmb{x})$ in $\mathsf{TL}_k$. + +Proof. We show this by induction on the number of vertices in $\mathcal{H}_{\varphi}$ which are not distinguished. For the base case, all vertices are distinguished and hence $\varphi(\boldsymbol{x})$ does not contain any summation and is an expression in $\mathsf{TL}_k$ itself. + +Suppose that in $\mathcal{H}_{\varphi}$ there are $p$ undistinguished vertices. That is, + +$$ +\varphi(\boldsymbol{x}) = \sum_{y_{1}} \dots \sum_{y_{p}} \psi(\boldsymbol{x}, \boldsymbol{y}). +$$ + +By assumption, we have an elimination sequence of the undistinguished vertices. Assume that $y_{p}$ is first in this ordering. Let us write + +$$ +\varphi(\boldsymbol{x}) = \sum_{y_{1}} \dots \sum_{y_{p}} \psi(\boldsymbol{x}, \boldsymbol{y}) = \sum_{y_{1}} \dots \sum_{y_{p - 1}} \psi_{1}(\boldsymbol{x}, \boldsymbol{y} \setminus y_{p}) \cdot \sum_{y_{p}} \psi_{2}(\boldsymbol{x}, \boldsymbol{y}), +$$ + +where $\psi_{1}$ is the product of predicates corresponding to the edges $F\in \mathcal{E}_{\varphi}\setminus \partial(y_{p})$, that is, those not containing $y_{p}$, and $\psi_{2}$ is the product of the predicates corresponding to the edges $F\in \partial(y_{p})$, that is, those containing $y_{p}$. Note that, because the induced width is $k - 1$, $\sum_{y_p}\psi_2(\boldsymbol{x},\boldsymbol{y})$ only contains indices in $U_{p}$, which is of size $\leq k$.
We now replace the previous expression with the expression + +$$ +\varphi^{\prime}(\boldsymbol{x}) = \sum_{y_{1}} \dots \sum_{y_{p - 1}} \psi_{1}(\boldsymbol{x}, \boldsymbol{y} \setminus y_{p}) \cdot R_{p}(\boldsymbol{x}, \boldsymbol{y}), +$$ + +where $R_{p}$ is regarded as an $(|U_p| - 1)$-ary predicate over the indices in $U_{p} \setminus \{y_{p}\}$. It is now easily verified that $\mathcal{H}_{\varphi'}$ is the hypergraph $\mathcal{H}_{p-1}$ corresponding to the variable ordering $\sigma$. We note that this is a hypergraph with $p-1$ undistinguished vertices. We can thus apply the induction hypothesis and replace $\varphi'(\boldsymbol{x})$ with its equivalent expression $\tilde{\varphi}'(\boldsymbol{x})$ in $\mathsf{TL}_k$. To obtain the expression $\tilde{\varphi}(\boldsymbol{x})$ equivalent to $\varphi(\boldsymbol{x})$, it now remains to replace the new predicate $R_{p}$ with its defining expression. We note again that $R_{p}$ contains at most $k-1$ indices, so it will occur in $\tilde{\varphi}'(\boldsymbol{x})$ in the form $R_{p}(\boldsymbol{x}, \boldsymbol{z})$ where $|\boldsymbol{z}| \leq k-1$. In other words, one of the $k$ available variables is not used, say $z_s$, and we can simply replace $R_{p}(\boldsymbol{x}, \boldsymbol{z})$ by $\sum_{z_s} \psi_2(\boldsymbol{x}, \boldsymbol{z}, z_s)$. + +As a consequence, one way of showing that a conjunctive expression $\varphi(\pmb{x})$ in $\mathsf{TL}$ is equivalently expressible in $\mathsf{TL}_k$ is to find an elimination sequence of induced width $k - 1$. This in turn is equivalent to $\mathcal{H}_{\varphi}$ having a treewidth of $k - 1$, as is shown, e.g., in Abo Khamis et al. (2016). As usual, we define the treewidth of a conjunctive expression $\varphi(\pmb{x})$ in $\mathsf{TL}$ as the treewidth of its associated hypergraph $\mathcal{H}_{\varphi}$.
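The elimination step in the proof can be mirrored numerically. In this hypothetical numpy sketch (ours, not the paper's), we take $\varphi(x_1) = \sum_{y_1}\sum_{y_2} E(x_1,y_1)\cdot E(y_1,y_2)$; eliminating $y_2$ introduces a unary predicate $R_2(y_1)$, after which every operation involves at most two indices, witnessing membership in $\mathsf{TL}_2$:

```python
import numpy as np

# Hypothetical adjacency matrix; the identity below holds for any E.
E = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [1, 1, 0, 0]], dtype=float)

# phi(x1) = sum_{y1} sum_{y2} E(x1, y1) * E(y1, y2), using three indices.
phi_naive = np.einsum('ab,bc->a', E, E)

# Elimination step: psi_2 = E(y1, y2) is the product of the predicates
# containing y_2, so R_2(y1) := sum_{y2} E(y1, y2) becomes a new unary
# predicate, and phi'(x1) = sum_{y1} E(x1, y1) * R_2(y1) never uses more
# than two index variables at once.
R2 = E.sum(axis=1)
phi_eliminated = E @ R2
```

Both evaluations produce the same vertex function, exactly as the replacement of $R_p$ by its defining expression in the proof guarantees.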
+ +We recall the definition of treewidth (modified to our setting): a tree decomposition $T = (V_T, E_T, \xi_T)$ of $\mathcal{H}_{\varphi}$, with $\xi_T : V_T \to 2^{\mathcal{V}}$, is such that + +- for any $F \in \mathcal{E}$, there is a $t \in V_T$ such that $F \subseteq \xi_T(t)$; and +- for any $v \in \mathcal{V}$ corresponding to a non-distinguished index variable, the set $\{t \mid t \in V_T, v \in \xi_T(t)\}$ is not empty and forms a connected sub-tree of $T$. + +The width of a tree decomposition $T$ is given by $\max_{t\in V_T}|\xi_T(t)| - 1$. Now the treewidth of $\mathcal{H}_{\varphi}$, denoted $\mathrm{tw}(\mathcal{H}_{\varphi})$, is the minimum width over all of its tree decompositions. We denote by $\mathrm{tw}(\varphi)$ the treewidth of $\mathcal{H}_{\varphi}$. Again, similar modifications are used when distinguished vertices are in place. Referring again to Abo Khamis et al. (2016), $\mathrm{tw}(\varphi) = k - 1$ is equivalent to having a variable elimination sequence for $\varphi$ of an induced width of $k - 1$. Hence, combining this observation with Proposition G.1 results in: + +Corollary G.2. Let $\varphi(\pmb{x})$ be a conjunctive TL expression of treewidth $k - 1$. Then $\varphi(\pmb{x})$ is equivalent to an expression $\tilde{\varphi}(\pmb{x})$ in $\mathsf{TL}_k$. + +That is, we have established Proposition 4.5 for conjunctive TL expressions. We next lift this to arbitrary $\mathsf{TL}(\Omega)$ expressions. + +# G.3 ARBITRARY TL(Ω) EXPRESSIONS + +First, we observe that any expression in TL can be written as a linear combination of conjunctive expressions. This readily follows from the linearity of the operations in TL and the fact that equality and inequality predicates can be eliminated.
More specifically, we may assume that $\varphi(\pmb{x})$ in TL is of the form + +$$ +\sum_{\alpha \in A} a_{\alpha} \psi_{\alpha}(\boldsymbol{x}, \boldsymbol{y}), +$$ + +with $A$ a finite set of indices, $a_{\alpha} \in \mathbb{R}$, and each $\psi_{\alpha}(\pmb{x}, \pmb{y})$ a conjunctive TL expression. We now define + +$$ +\operatorname{tw}(\varphi) := \max\left\{\operatorname{tw}(\psi_{\alpha}) \mid \alpha \in A\right\} +$$ + +for expressions in TL. To deal with expressions in $\mathsf{TL}(\Omega)$ that may contain function applications, we define $\mathrm{tw}(\varphi)$ as the maximum treewidth of the following expressions: (i) the expression $\varphi_{\mathrm{nofun}}(\pmb{x})\in \mathsf{TL}$ obtained by replacing each top-level function application $f(\varphi_1,\ldots ,\varphi_p)$ by a new predicate $R_{f}$ with free indices $\mathrm{free}(\varphi_1)\cup \dots \cup \mathrm{free}(\varphi_p)$; and (ii) all expressions $\varphi_{1},\ldots ,\varphi_{p}$ occurring in a top-level function application $f(\varphi_1,\ldots ,\varphi_p)$ in $\varphi$. We note that these expressions either have no function applications (as in (i)) or have function applications of lower nesting depth than $\varphi$ (as in (ii)). In other words, applying this definition recursively, we end up with expressions with no function applications, for which treewidth was already defined. With this notion of treewidth at hand, Proposition 4.5 now readily follows. + +# H HIGHER-ORDER MPNNs + +We conclude the supplementary material by elaborating on $k$-MPNNs and by relating them to classical MPNNs (Gilmer et al., 2017). As underlying tensor language we use $\mathsf{TL}_{k + 1}(\Omega ,\Theta)$, which includes arbitrary functions $(\Omega)$ and aggregation functions $(\Theta)$, as defined in Section C.5.
We recall from Section 3 that $k$-MPNNs refer to the class of embeddings $f: \mathcal{G}_s \to \mathbb{R}^\ell$ for some $\ell \in \mathbb{N}$ that can be represented in $\mathsf{TL}_{k+1}(\Omega, \Theta)$. When considering an embedding $f: \mathcal{G}_s \to \mathbb{R}^\ell$, the notion of being represented is defined in terms of the existence of $\ell$ expressions in $\mathsf{TL}_{k+1}(\Omega, \Theta)$, which together provide each of the $\ell$ components of the embedding in $\mathbb{R}^\ell$. We remark, however, that we can alternatively include concatenation in the tensor language. As such, we can concatenate $\ell$ separate expressions into a single expression. As a positive side effect, for $f: \mathcal{G}_s \to \mathbb{R}^\ell$ to be represented in the tensor language, we can then simply require the existence of a single expression, rather than $\ell$ separate ones. This results in a slightly more succinct way of reasoning about $k$-MPNNs. + +In order to reason about $k$-MPNNs as a class of embeddings, we can obtain an equivalent definition for the class of $k$-MPNNs by inductively stating how new embeddings are computed out of old embeddings. Let $X = \{x_{1},\ldots ,x_{k + 1}\}$ be a set of $k + 1$ distinct variables. In the following, $\pmb{v}$ denotes a tuple of vertices that has at least as many components as the highest index of the variables used in expressions. Intuitively, variable $x_{j}$ refers to the $j$th component in $\pmb{v}$. We also denote the image of a graph $G$ and tuple $\pmb{v}$ under an expression $\varphi$, i.e., the semantics of $\varphi$ given $G$ and $\pmb{v}$, by $\varphi(G,\pmb{v})$ rather than by $[\![\varphi ,\pmb{v}]\!]_G$. We further simply refer to embeddings rather than expressions. + +We first define "atomic" $k$-MPNN embeddings which extract basic information from the graph $G$ and the given tuple $\pmb{v}$ of vertices.
+ +- Label embeddings of the form $\varphi(x_i) \coloneqq \mathsf{P}_s(x_i)$, with $x_i \in X$, and defined by $\varphi(G, \mathbf{v}) \coloneqq (\operatorname{col}_G(v_i))_s$, are $k$-MPNNs; +- Edge embeddings of the form $\varphi(x_i, x_j) \coloneqq \mathsf{E}(x_i, x_j)$, with $x_i, x_j \in X$, and defined by + +$$ +\varphi(G, \boldsymbol{v}) := \left\{\begin{array}{ll} 1 & \text{if } v_i v_j \in E_G \\ 0 & \text{otherwise,} \end{array}\right. +$$ + +are $k$-MPNNs; and + +- (Dis-)equality embeddings of the form $\varphi(x_i, x_j) := \mathbf{1}_{x_i \,\mathsf{op}\, x_j}$, with $x_i, x_j \in X$ and $\mathsf{op} \in \{=, \neq\}$, and defined by + +$$ +\varphi(G, \boldsymbol{v}) := \left\{\begin{array}{ll} 1 & \text{if } v_i \,\mathsf{op}\, v_j \\ 0 & \text{otherwise,} \end{array}\right. +$$ + +are $k$-MPNNs. + +We next inductively define new $k$-MPNNs from "old" $k$-MPNNs. That is, given $k$-MPNNs $\varphi_1(\pmb{x}_1), \dots, \varphi_\ell(\pmb{x}_\ell)$, the following are also $k$-MPNNs: + +- Function applications of the form $\varphi(\boldsymbol{x}) \coloneqq \mathbf{f}(\varphi_1(\boldsymbol{x}_1), \ldots, \varphi_\ell(\boldsymbol{x}_\ell))$ are $k$-MPNNs, where $\boldsymbol{x} = \boldsymbol{x}_1 \cup \dots \cup \boldsymbol{x}_\ell$, and defined by + +$$ +\varphi(G, \boldsymbol{v}) := \mathbf{f}\left(\varphi_{1}(G, \boldsymbol{v}|_{\boldsymbol{x}_{1}}), \dots, \varphi_{\ell}(G, \boldsymbol{v}|_{\boldsymbol{x}_{\ell}})\right). +$$ + +Here, if $\varphi_i(G,\pmb{v}|_{\pmb{x}_i})\in \mathbb{R}^{d_i}$, then $\mathbf{f}:\mathbb{R}^{d_1}\times \dots \times \mathbb{R}^{d_\ell}\to \mathbb{R}^d$ for some $d\in \mathbb{N}$. That is, $\varphi$ generates an embedding in $\mathbb{R}^d$. We remark that our function applications include concatenation.
+ +- Unconditional aggregations of the form $\varphi(\pmb{x}) \coloneqq \mathsf{agg}_{x_j}^{\mathbf{F}}(\varphi_1(\pmb{x}, x_j))$ are $k$-MPNNs, where $x_j \in X$ and $x_j \notin \pmb{x}$, and defined by + +$$ +\varphi(G, \boldsymbol{v}) := \mathbf{F}\big(\{\varphi_{1}(G, v_{1}, \dots, v_{j - 1}, w, v_{j + 1}, \dots, v_{k}) \mid w \in V_{G}\}\big). +$$ + +Here, if $\varphi_{1}$ generates an embedding in $\mathbb{R}^{d_1}$, then $\mathbf{F}$ is an aggregation function assigning to each multiset of vectors in $\mathbb{R}^{d_1}$ a vector in $\mathbb{R}^d$, for some $d\in \mathbb{N}$. So, $\varphi$ generates an embedding in $\mathbb{R}^d$. + +- Conditional aggregations of the form $\varphi(x_i) := \mathsf{agg}_{x_j}^{\mathbf{F}}(\varphi_1(x_i, x_j) \mid E(x_i, x_j))$ are $k$-MPNNs, with $x_i, x_j \in X$, and defined by + +$$ +\varphi(G, \boldsymbol{v}) := \mathbf{F}\big(\{\varphi_{1}(G, v_{i}, w) \mid w \in N_{G}(v_{i})\}\big). +$$ + +As before, if $\varphi_{1}$ generates an embedding in $\mathbb{R}^{d_1}$, then $\mathbf{F}$ is an aggregation function assigning to each multiset of vectors in $\mathbb{R}^{d_1}$ a vector in $\mathbb{R}^d$, for some $d \in \mathbb{N}$. So again, $\varphi$ generates an embedding in $\mathbb{R}^d$. + +As defined in the main paper, we also consider the subclass $k$-MPNNs$^{(t)}$, obtained by only considering $k$-MPNNs defined in terms of expressions of aggregation depth at most $t$. Our main results, phrased in terms of $k$-MPNNs, are: + +$$ +\rho_{1}\big(\mathsf{vwl}_{k}^{(t)}\big) = \rho_{1}\big(k\text{-}\mathsf{MPNNs}^{(t)}\big) \quad\text{and}\quad \rho_{0}(\mathsf{gwl}_{k}) = \rho_{0}(k\text{-}\mathsf{MPNNs}). +$$ + +Hence, if the embeddings computed by GNNs are $k$-MPNNs, one obtains an upper bound on their separation power in terms of $k$-WL.
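As an illustration, here is the conditional-aggregation construct in Python (our sketch, not the paper's code; `phi1` and `F_agg` are illustrative stand-ins for the function and aggregation symbols). With $\varphi_1(v, w) = H[w]$ and summation as the aggregation function, conditional aggregation computes vertex degrees, and a subsequent function application (here, pairing) yields one MPNN-style update:

```python
import numpy as np

def conditional_agg(E, phi1, F_agg):
    """phi(G, v) := F_agg of the multiset { phi1(v, w) : w a neighbour of v }."""
    n = E.shape[0]
    return [F_agg([phi1(v, w) for w in range(n) if E[v, w]]) for v in range(n)]

# Path graph 1 - 2 - 3 (0-indexed), constant initial embedding H[v] = 1.
E = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
H = [1.0, 1.0, 1.0]

# Conditional aggregation with phi1(v, w) = H[w] and summation: degrees.
deg = conditional_agg(E, lambda v, w: H[w], sum)

# A function application (pairing) combining old embedding and aggregate,
# as in one step of an MPNN.
new_H = [(H[v], deg[v]) for v in range(3)]
```

Iterating such steps with injective function applications reproduces the color-refinement behaviour discussed below the definition.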
+ +The classical MPNNs (Gilmer et al., 2017) are a subclass of 1-MPNNs in which no unconditional aggregation can be used and, furthermore, function applications require input embeddings with the same single variable ($x_{1}$ or $x_{2}$), and only $\mathbf{1}_{x_i = x_i}$ and $\mathbf{1}_{x_i\neq x_i}$ are allowed. In other words, they correspond to guarded tensor language expressions (Section 4.2). We denote this class of 1-MPNNs by MPNNs, and by $\mathrm{MPNNs}^{(t)}$ when restrictions on aggregation depth are in place. And indeed, the classical way of describing MPNNs as + +$$ +\varphi^{(0)}(x_{1}) = \left(P_{1}(x_{1}), \dots, P_{\ell}(x_{1})\right) +$$ + +$$ +\varphi^{(t)}(x_{1}) = \mathbf{f}^{(t)}\Big(\varphi^{(t - 1)}(x_{1}), \mathsf{aggr}_{x_{2}}^{\mathbf{F}^{(t)}}\big(\varphi^{(t - 1)}(x_{1}), \varphi^{(t - 1)}(x_{2}) \mid E(x_{1}, x_{2})\big)\Big) +$$ + +corresponds to 1-MPNNs that satisfy the above-mentioned restrictions. Without readouts, MPNNs compute vertex embeddings and hence our results imply + +$$ +\rho_{1}\big(\mathsf{cr}^{(t)}\big) = \rho_{1}\big(\mathsf{MPNNs}^{(t)}\big). +$$ + +Furthermore, MPNNs with a readout function fall into the category of 1-MPNNs: + +$$ +\varphi := \mathsf{aggr}_{x_{1}}^{\mathsf{readout}}\big(\varphi^{(t)}(x_{1})\big), +$$ + +where unconditional aggregation is used. Hence, + +$$ +\rho_{0}\big(\mathsf{gcr}^{(t)}\big) = \rho_{0}\big(\mathsf{gwl}_{1}^{(t)}\big) = \rho_{0}\big(1\text{-}\mathsf{MPNNs}^{(t + 1)}\big). +$$ + +We thus see that $k$-MPNNs gracefully extend MPNNs and can be used for obtaining upper bounds on the separation power of classes of GNNs.
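The separation-power bound $\rho_1(\mathsf{cr}^{(t)}) = \rho_1(\mathsf{MPNNs}^{(t)})$ can be probed with a small color-refinement sketch (ours, for illustration): graph-level color refinement, and hence any MPNN with readout, cannot separate the 6-cycle from two disjoint triangles, while it does separate the 6-cycle from the 6-path. (Strictly, colours should be refined jointly on the disjoint union when comparing two graphs; for these regular examples the per-graph canonical relabelling already suffices.)

```python
def color_refinement(adj, rounds):
    """Color refinement (1-WL): c_v <- (c_v, multiset of neighbour colours),
    canonically relabelled each round; returns the sorted multiset of final
    colours, i.e., a graph-level readout."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        sig = {v: (colors[v], tuple(sorted(colors[w] for w in adj[v]))) for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colors = {v: relabel[sig[v]] for v in adj}
    return sorted(colors.values())

# 6-cycle, two disjoint triangles, and 6-path, as adjacency lists.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
p6 = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
```

In the 6-cycle and the two triangles every vertex has degree 2 and sees the same local structure at every round, so the readout multisets coincide; the 6-path's endpoints break the symmetry.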
+# EXTENDING THE WILDS BENCHMARK FOR UNSUPERVISED ADAPTATION + +Shiori Sagawa $^{1*}$ , Pang Wei Koh $^{1*}$ , Tony Lee $^{1*}$ , Irena Gao $^{1*}$ , Sang Michael Xie $^{1}$ , Kendrick Shen $^{1}$ , Ananya Kumar $^{1}$ , Weihua Hu $^{1}$ , Michihiro Yasunaga $^{1}$ , Henrik Marklund $^{1}$ , Sara Beery $^{2}$ , Etienne David $^{3}$ , Ian Stavness $^{4}$ , Wei Guo $^{5}$ , Jure Leskovec $^{1}$ , Kate Saenko $^{6}$ , Tatsunori Hashimoto $^{1}$ , Sergey Levine $^{7}$ , Chelsea Finn $^{1}$ , Percy Liang $^{1}$ $^{*}$ Equal contribution $^{1}$ Stanford University $^{2}$ Caltech $^{3}$ INRAE $^{4}$ University of Saskatchewan $^{5}$ University of Tokyo $^{6}$ Boston University $^{7}$ University of
California, Berkeley + +# ABSTRACT + +Machine learning systems deployed in the wild are often trained on a source distribution but deployed on a different target distribution. Unlabeled data can be a powerful point of leverage for mitigating these distribution shifts, as it is frequently much more available than labeled data and can often be obtained from distributions beyond the source distribution as well. However, existing distribution shift benchmarks with unlabeled data do not reflect the breadth of scenarios that arise in real-world applications. In this work, we present the WILDS 2.0 update, which extends 8 of the 10 datasets in the WILDS benchmark of distribution shifts to include curated unlabeled data that would be realistically obtainable in deployment. These datasets span a wide range of applications (from histology to wildlife conservation), tasks (classification, regression, and detection), and modalities (photos, satellite images, microscope slides, text, molecular graphs). The update maintains consistency with the original WILDS benchmark by using identical labeled training, validation, and test sets, as well as identical evaluation metrics. We systematically benchmark state-of-the-art methods that use unlabeled data, including domain-invariant, self-training, and self-supervised methods, and show that their success on WILDS is limited. To facilitate method development, we provide an open-source package that automates data loading and contains the model architectures and methods used in this paper. Code and leaderboards are available at https://wilds.stanford.edu. + +# 1 INTRODUCTION + +Distribution shifts—when models are trained on a source distribution but deployed on a different target distribution—are frequent problems for machine learning systems in the wild (Quinonero-Candela et al., 2009; Geirhos et al., 2020; Koh et al., 2021). In this paper, we focus on the use of unlabeled data to mitigate these shifts. 
Unlabeled data is a powerful point of leverage as it is more readily available than labeled data and can often be obtained from distributions beyond the source distribution. For example, in the crop detection task in Figure 1, we wish to learn a model that can extrapolate to a set of target domains (farms) (David et al., 2020), and while we only have labeled training examples from some source domains, we have many more unlabeled examples from the source domains, from extra domains, and even directly from the target domains. + +Many methods for leveraging unlabeled data have been highly successful on some types of distribution shifts (Berthelot et al., 2021; Zhang et al., 2021). However, the datasets typically used for evaluating these methods do not reflect many of the realistic shifts that might occur in the wild. These evaluations tend instead to focus on shifts between photos and stylized versions like sketches (Li et al., 2017; Venkateswara et al., 2017; Peng et al., 2019) or synthetic renderings (Peng et al., 2018), or between variants of digits datasets like MNIST (LeCun et al., 1998) and SVHN (Netzer et al., 2011). Unfortunately, prior work has shown that methods that work well on one type of shift need not generalize to others (Taori et al., 2020; Djolonga et al., 2020; Xie et al., 2021a; Miller et al., 2021), which raises the question of how well they would work on a wider array of realistic shifts. + +![](images/b8d6e301e15ec30337ca4a153ab48cef0b9c66d21b25ca56c6afdf885b7750d1.jpg) +Figure 1: Each WILDS dataset (Koh et al., 2021) contains labeled data from the source domains (for training), validation domains (for hyperparameter selection), and target domains (for held-out evaluation). In the WILDS 2.0 update, we extend these datasets with unlabeled data from a combination of source, validation, or target domains, as well as extra domains from which there is no labeled data. The labeled data is exactly the same as in WILDS 1.0. 
In this figure, we illustrate the setting with the GLOBALWHEAT-WILDS dataset, where domains correspond to images acquired from different locations and at different times. + +In this paper, we make two contributions. First, we present WILDS 2.0 (Figure 2), an updated version of the recent WILDS benchmark of in-the-wild distribution shifts (Koh et al., 2021). WILDS datasets span a wide range of tasks and modalities, and each dataset reflects a domain generalization or subpopulation shift setting with a substantial gap between in-distribution and out-of-distribution performance. However, WILDS 1.0 only contained labeled data, which limits the leverage for learning robust models. In WILDS 2.0, we extend 8 of the 10 WILDS datasets$^{1}$ with curated unlabeled data acquired from the same source and target domains as the labeled data, as well as from extra domains of the same type: e.g., in the GLOBALWHEAT-WILDS dataset pictured in Figure 1, we acquired unlabeled photos of wheat fields from the source and target farms as well as extra farms that were not in the original labeled dataset. In total, WILDS 2.0 adds 14.5 million unlabeled examples, expanding the number of examples for each dataset by $3 - 13 \times$ and allowing us to combine the real-world relevance of WILDS with the leverage of unlabeled data. + +Second, we developed a standardized and consistent protocol for evaluating methods that leverage the unlabeled data in WILDS 2.0. We assessed representatives from three popular categories: methods for learning domain-invariant representations (Sun & Saenko, 2016; Ganin et al., 2016), self-training methods (Lee, 2013; Sohn et al., 2020; Xie et al., 2020), and pre-training methods that rely on self-supervision (Devlin et al., 2019; Caron et al., 2020). These methods have been successful on some types of shifts, such as going from photos to sketches, or from handwritten digits to street signs (Berthelot et al., 2021; Zhang et al., 2021).
+ +Our results on WILDS are mixed: many methods did not outperform standard supervised training despite using additional unlabeled data, and the only clear successes were on two image classification datasets (CAMELYON17-WILDS and FMOW-WILDS). Successful methods relied heavily on data augmentation (Xie et al., 2020; Caron et al., 2020), which limited their applicability to modalities where augmentations are not as well developed, such as text and molecular graphs. The same methods were unsuccessful on image regression and detection tasks, which have been relatively understudied: e.g., pseudolabel-based methods do not straightforwardly apply to regression. For the text datasets, continued language model pre-training did not help, unlike in prior work (Gururangan et al., 2020). Our results suggest fruitful avenues for future work, such as developing data augmentations for non-image modalities and more effective hyperparameter tuning protocols. + +Overall, our results underscore the importance of developing and evaluating methods for unlabeled data on a wider variety of real-world shifts than is typically studied. To this end, we have updated the open-source Python WILDS package to include unlabeled data loaders, compatible implementations of all the methods we benchmarked, and scripts to replicate all experiments in this paper (Appendix G). Code and public leaderboards are available at https://wilds.stanford.edu. By allowing developers to easily test algorithms across the variety of datasets in WILDS 2.0, we hope to accelerate the development of methods that can leverage unlabeled data to improve robustness to real-world distribution shifts. + +![](images/811da0866497cf5d52524ba2813c6b6e3e091a0537419b5c5a551120dfbe23db.jpg) +Figure 2: The WILDS 2.0 update adds unlabeled data to 8 WILDS datasets. For each dataset, we kept the labeled data from WILDS and expanded the datasets by $3 - 13\times$ with unlabeled data from the same underlying dataset. The type of unlabeled data (i.e., whether it comes from source, extra, validation, or target domains) depends on what is realistic and available for the application. Beyond these 8 datasets, WILDS also contains 2 datasets without unlabeled data: the PY150-WILDS code completion dataset and the RXRX1-WILDS genetic perturbation dataset. For all datasets, the labeled data and evaluation metrics are exactly the same as in WILDS 1.0. Figure adapted with permission from Koh et al. (2021). + +Finally, we note that WILDS 2.0 is not a separate benchmark from WILDS 1.0: the labeled data and evaluation metrics are exactly the same in WILDS 1.0 and WILDS 2.0, and future results should be reported on the overall WILDS benchmark, with a note describing what kind of unlabeled data (if any) was used. In this paper, we discuss the addition of unlabeled data and analyze the performance of methods that use the unlabeled data. For a more detailed description of the datasets, evaluation metrics, and models used, please refer to the original WILDS paper (Koh et al., 2021). + +# 2 COMPARISON WITH EXISTING UNSUPERVISED ADAPTATION BENCHMARKS + +WILDS 2.0 offers a diverse range of applications and modalities while also providing an extensive amount of unlabeled data that can be used as leverage for training robust models. In this section, we briefly compare with other existing ML benchmarks for unsupervised adaptation. + +Images. Evaluations of unsupervised adaptation methods for image classification have focused on generalizing from natural photos to a range of stylized images, such as sketches and cartoons (PACS (Li et al., 2017), Office-Home (Venkateswara et al., 2017), and DomainNet (Peng et al., 2019)), product images (Office-31 (Saenko et al., 2010)), and synthetic renderings (VisDA (Peng et al., 2018)), though location-based shifts have also been recently explored (Dubey et al., 2021).
It is also popular to evaluate on shifts between digits datasets, such as MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011), and USPS (Hull, 1994). In image detection and segmentation, existing adaptation benchmarks tend to focus on generalizing from synthetic to natural scenes (Ros et al., 2016; Richter et al., 2016; Cordts et al., 2016; Hoffman et al., 2018), which can be an important tool for realistic problems but is not the focus of this work. In contrast, WILDS considers real-world distribution shifts, and it spans diverse modalities (satellite, microscope, agriculture, and camera trap images) and tasks (classification, regression, detection). + +Text. Methods for unsupervised adaptation in NLP are typically evaluated on domain shifts between different textual sources, such as news articles, different categories of product reviews, Wikipedia, or social media platforms (Blitzer et al., 2007; Mansour et al., 2009; Oren et al., 2019; Miller et al., 2020; Kamath et al., 2020; Hendrycks et al., 2020), or even more specialized sources such as legal documents (Chalkidis et al., 2020) or biomedical papers (Lee et al., 2020b; Gu et al., 2020). Multilingual tasks can also be a setting for unsupervised adaptation (Conneau et al., 2018; Conneau & Lample, 2019; Hu et al., 2020a; Clark et al., 2020), especially when generalizing to low-resource languages (Nekoto et al., 2020). The WILDS text datasets differ in that they focus on subpopulation performance, either on particular demographics in CIVILCOMMENTS-WILDS or on tail populations in AMAZON-WILDS, rather than on adapting to a completely distinct domain. + +Molecules. While unlabeled molecules have been used for pre-training (Hu et al., 2020c; Rong et al., 2020), no standardized unsupervised adaptation benchmarks have been developed. + +# 3 PROBLEM SETTING + +As in WILDS 1.0, we study the domain shift setting where the data is drawn from domains $d \in \mathcal{D}$.
Each domain $d$ corresponds to a data distribution $P_{d}$ over $(x,y,d)$, where $x$ is the input, $y$ is the prediction target, and all points from $P_{d}$ have domain $d$. See Koh et al. (2021) for more details. The domains come in four types:
| Type of domain | Labeled data | Unlabeled data |
| --- | --- | --- |
| Source domains | Used for training | Can be used for training, if available |
| Extra domains | None | Can be used for training, if available |
| Validation domains | Used for hyperparameter tuning | Can be used for training, if available |
| Target domains | Used for held-out evaluation | Can be used for training, if available |
+ +Table 1: All datasets have labeled source, validation, and target data, as well as unlabeled data from one or more types of domains, depending on what is realistic for the application. + +We consider several variants of the domain shift setting. In some applications, all four types of domains are disjoint (e.g., if we are training on labeled data from some hospitals but seeking to generalize to new hospitals); in others, the target domains are a subset of the source domains (e.g., if we are training on a heterogeneous dataset but seeking to measure model performance on particular demographic subpopulations). Models are trained on labeled data from the source domains, as well as unlabeled data of one or more types of domains, depending on what is realistic for the application. + +# 4 DATASETS + +WILDS 2.0 augments 8 WILDS datasets with curated unlabeled data. For consistency, the labeled datasets and evaluation metrics are exactly the same as in WILDS 1.0, which allows direct evaluations of the utility of unlabeled training data. The labeled and unlabeled data are disjoint, e.g., the unlabeled target data is different from the labeled target data used for evaluation. Here, we briefly describe each dataset, why unlabeled data can be realistically obtained for the corresponding task, and how it might help. In Appendix A, we provide more information on each dataset, including data provenance and details on data processing. In general, all of the unlabeled datasets in WILDS 2.0 were processed in a similar way as their corresponding labeled datasets from WILDS 1.0. + +IWILDCAM2020-WILDS: Species classification across different camera traps. The task is to classify the animal species in a camera trap image (Beery et al., 2020). We aim to generalize to new camera trap locations despite variations in illumination, background, and label frequencies (Beery et al., 2018). 
While hundreds of thousands of camera traps are active worldwide, only a small subset of these traps have had images labeled, and the unlabeled data from the other camera traps capture diverse operating conditions that can be used to learn robust models. In this work, we add unlabeled images from 3,215 extra camera traps also in the WCS Camera Traps dataset (Beery et al., 2020). This expands the number of camera traps by $11 \times$ and the number of examples by $5 \times$. + +CAMELYON17-WILDS: Tumor identification across different hospitals. The task is to classify image patches from lymph node sections as tumor or normal tissue. We seek to generalize to new hospitals, which can differ in their patient demographics and data acquisition protocols (Veta et al., 2016; AlBadawy et al., 2018; Komura & Ishikawa, 2018; Tellez et al., 2019). While obtaining labeled data for histopathology applications requires painstaking annotations from expert pathologists, hospitals typically accumulate unlabeled slide images during normal operation. These unlabeled images could be used to adapt to differences between hospitals (e.g., different staining protocols might lead to different color distributions). We provide unlabeled patches from train and test hospitals, which expands the total number of patches by $7.5 \times$. Both the labeled and unlabeled data are adapted from the Camelyon17 dataset (Bandi et al., 2018). + +FMOW-WILDS: Land use classification across different regions and years. The task is to classify the type of building or land usage in a satellite image. Given training data from before 2013, we aim to generalize to satellite imagery taken after 2013, while maintaining high accuracy across all geographic regions. While labeling land use requires combining map data and expert annotations, unlabeled data is available in all locations in the world through constant streams of global satellite imagery.
Prior work has shown that unlabeled satellite data can improve OOD accuracy in landcover and cropland prediction (Xie et al., 2021a) as well as aerial object and scene classification (Reed et al., 2021). We provide unlabeled satellite imagery across all regions from the train and test timeframes defined in WILDS, expanding the dataset by $3.5 \times$. Both the labeled and unlabeled data are adapted from the FMoW dataset (Christie et al., 2018). + +POVERTYMAP-WILDS: Poverty mapping across different countries. The task is to predict a real-valued asset wealth index of the area in a satellite image. We consider generalizing across different countries. Like FMOW-WILDS, unlabeled satellite imagery is available globally, while labeled data is expensive to collect as it requires conducting nationally representative surveys in the field. Prior work on poverty prediction has used unlabeled data for entropy minimization (Jean et al., 2018) and pre-training on auxiliary tasks such as nighttime light prediction (Xie et al., 2016; Jean et al., 2016), but these studies do not consider generalization to new countries. We provide unlabeled satellite imagery from both train and test countries, expanding the dataset by $14 \times$. Both the labeled and unlabeled data are adapted from Yeh et al. (2020). + +GLOBALWHEAT-WILDS: Wheat head detection across different regions. The task is to localize wheat heads in overhead field images. We seek to generalize across image acquisition sessions, each of which represents a particular location, time, and sensor; these can differ in wheat genotype, wheat head appearance, growing conditions, background appearance, illumination, and acquisition protocols. Wheat field images contain many densely packed and overlapping instances, making labeling wheat heads in images costly, tedious, and sensitive to the individual annotator.
However, hundreds of agricultural research institutes around the world collect terabytes of unlabeled field images which could be used for training. We add unlabeled field images from train, test, and extra acquisition sessions, expanding the dataset by $10\times$. The labeled and unlabeled data are adapted from the Global Wheat Head Detection dataset and its underlying sources (David et al., 2020; 2021). + +OGB-MOLPCBA: Molecular property prediction across different scaffolds. The task is to predict the biological activity of small molecules represented as molecular graphs (Wu et al., 2018; Hu et al., 2020b). We seek to generalize to molecules with new scaffold structures. Labels on biological activity are only available for a small portion of molecules, as they require expensive lab experiments to obtain. However, unlabeled molecule structures are readily available in large-scale chemical databases such as PubChem (Bolton et al., 2008), and have been previously used for pretraining (Hu et al., 2020c) and semi-supervised learning (Sun et al., 2020). We provide 5 million unlabeled molecules from source and target scaffolds, which expands the number of molecules by $12.5 \times$. The original labeled data was curated by MoleculeNet (Wu et al., 2018) from PubChem, and we similarly extracted the unlabeled data from PubChem (Bolton et al., 2008). + +CIVILCOMMENTS-WILDS: Toxicity classification across demographic identities. The task is to classify whether a text comment is toxic or not. We consider the subpopulation shift setting, where the model must classify accurately across groups of comments mentioning different demographic identities. While labeled data requires large-scale crowdsourced annotation of comment toxicity, unlabeled article comments are widely available on the internet. We provide unannotated comments as unlabeled data, which expands the size of the dataset by $4.5 \times$. Both the labeled and unlabeled data are adapted from Borkan et al. (2019).
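Subpopulation-shift datasets such as CIVILCOMMENTS-WILDS are scored by worst-group performance: the metric is computed within each group of comments and the minimum is reported. The following is a minimal sketch of that computation, with hypothetical toy group labels (not data or code from the WILDS package):

```python
# Worst-group accuracy: accuracy on the worst-performing group.
# Groups here stand in for demographic-identity annotations.
from collections import defaultdict

def worst_group_accuracy(preds, labels, groups):
    """Return the minimum over groups of the per-group accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for p, y, g in zip(preds, labels, groups):
        total[g] += 1
        correct[g] += int(p == y)
    return min(correct[g] / total[g] for g in total)

preds  = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 0, 1, 0, 1]
groups = ["a", "a", "b", "b", "b", "a"]  # toy identity groups
# group "a": 3/3 correct; group "b": 2/3 correct -> worst-group acc = 2/3
print(worst_group_accuracy(preds, labels, groups))
```

A model can score well on average accuracy while this metric stays low, which is exactly the gap these datasets are designed to expose.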
+ +AMAZON-WILDS: Sentiment classification across different users. The task is to classify the star ratings of Amazon reviews. We seek to perform consistently well across new reviewers. While the labels (star ratings) are always available for Amazon reviews in practice, unlabeled data is a common source of leverage for sentiment classification more generally, with prior work in domain adaptation (Blitzer & Pereira, 2007; Glorot et al., 2011) and semi-supervised learning (Dasgupta & Ng, 2009; Li et al., 2011). We provide unlabeled reviews from test and extra reviewers, which expands the total number of reviews by $7.5 \times$. Both the labeled and unlabeled data are adapted from the Amazon review dataset by Ni et al. (2019). + +# 5 ALGORITHMS + +For our evaluation, we selected representative methods from the three categories described below. These methods exemplify current approaches to using unlabeled data to improve robustness, and they have been successful on popular domain adaptation benchmarks like DomainNet (Peng et al., 2019) and semi-supervised settings like improving ImageNet accuracy by leveraging unlabeled images from the internet (Xie et al., 2020; Caron et al., 2020). For more details, see Appendix B. + +Domain-invariant methods. These methods learn feature representations that are invariant across different domains by penalizing differences between learned source and target representations (Long et al., 2015; Ganin et al., 2016; Sun & Saenko, 2016; Long et al., 2017; 2018; Saito et al., 2018; Zhang et al., 2018; Xu et al., 2019; Zhang et al., 2019b). We discuss these methods further in Appendix B.2. For our experiments, we evaluate two classical methods: + +- Domain-Adversarial Neural Networks (DANN) (Ganin et al., 2016) penalize representations on which an auxiliary classifier can easily discriminate between source and target examples.
+- Correlation Alignment (CORAL) (Sun et al., 2016; Sun & Saenko, 2016) penalizes differences between the means and covariances of the source and target feature distributions. + +Self-training. Self-training methods "pseudo-label" unlabeled examples with the model's own predictions and then train on them as if they were labeled examples. These methods often also use consistency regularization, which encourages the model to make consistent predictions on augmented views of unlabeled examples (Sohn et al., 2020; Xie et al., 2020; Berthelot et al., 2021). Self-training methods have recently been successfully applied to unsupervised adaptation (Saito et al., 2017; Berthelot et al., 2021; Zhang et al., 2021). We include three representative algorithms: + +- Pseudo-Label (Lee, 2013) dynamically generates pseudolabels and updates the model each batch. +- FixMatch (Sohn et al., 2020) adds consistency regularization on top of the Pseudo-Label algorithm. Specifically, it generates pseudolabels on a weakly augmented view of the unlabeled data, and then minimizes the loss of the model's prediction on a strongly augmented view. +- Noisy Student (Xie et al., 2020) leverages weak and strong augmentations like FixMatch, but instead of dynamically generating pseudolabels for each batch, it alternates between a few teacher phases, where it generates pseudolabels, and student phases, where it trains to convergence on the (pseudo)labeled data. + +Self-supervision. Self-supervised methods learn useful representations by training on unlabeled data via auxiliary proxy tasks. 
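To make the self-training recipe concrete, the core step shared by Pseudo-Label, FixMatch, and Noisy Student is selecting confident model predictions on unlabeled examples. A toy sketch, not the benchmarked implementation (the 0.9 threshold is illustrative and is a tuned hyperparameter in practice):

```python
# Toy pseudo-labeling step: keep only unlabeled examples whose
# top predicted class probability exceeds a confidence threshold.
THRESHOLD = 0.9  # illustrative value

def pseudo_label(probs, threshold=THRESHOLD):
    """probs: list of per-example class-probability lists.
    Returns (example_index, argmax_class) pairs for confident examples."""
    selected = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            selected.append((i, p.index(conf)))
    return selected

# Model predictions on (weakly augmented) unlabeled examples:
probs = [
    [0.95, 0.03, 0.02],  # confident -> pseudo-labeled as class 0
    [0.40, 0.35, 0.25],  # unconfident -> discarded
    [0.05, 0.92, 0.03],  # confident -> pseudo-labeled as class 1
]
print(pseudo_label(probs))  # -> [(0, 0), (2, 1)]
```

In FixMatch, the loss on these pseudo-labels is then computed on a strongly augmented view of the same examples and the pseudo-labels are regenerated every batch; Noisy Student instead regenerates them only between teacher phases.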
Common approaches include reconstruction tasks (Vincent et al., 2008; Erhan et al., 2010; Devlin et al., 2019; Gidaris et al., 2018; Lewis et al., 2020) and contrastive learning (He et al., 2020; Chen et al., 2020b; Caron et al., 2020; Radford et al., 2021b), and recent work has shown that self-supervised methods can reduce dependence on spurious correlations and improve performance on domain adaptation tasks (Wang et al., 2021; Tsai et al., 2021; Mishra et al., 2021). We use these self-supervision methods for unsupervised adaptation by first pre-training models on the unlabeled data, and then finetuning them on the labeled source data (Shen et al., 2021). We evaluate popular self-supervised methods for vision and language: + +- SwAV (Caron et al., 2020) is a contrastive learning algorithm that maps representations to a set of clusters and then enforces similarity between cluster assignments. +- Masked language modeling (MLM) (Devlin et al., 2019) randomly masks some of the tokens from input text and trains the model to predict the missing tokens. + +# 6 EXPERIMENTS + +To evaluate how well existing methods can leverage unlabeled data to be robust to in-the-wild distribution shifts, we benchmarked the methods above on all applicable WILDS 2.0 datasets. + +# 6.1 SETUP + +We used the default models, labeled training and test sets, and evaluation metrics from WILDS. + +Unlabeled data. WILDS 2.0 contains multiple types of unlabeled data (from source, extra, validation, and/or target domains). For simplicity, we ran experiments on a single type of unlabeled data for each dataset. Where possible, we used unlabeled target data to allow methods to directly adapt to the target distribution; for IWILDCAM2020-WILDS and CIVILCOMMENTS-WILDS, which do not have unlabeled target data, we used the extra domains instead. All methods use exactly the same sets of labeled and unlabeled training data (except ERM, which does not use unlabeled data). + +Hyperparameters.
We tuned each method on each dataset separately using random hyperparameter search. Following WILDS 1.0, we used the labeled out-of-distribution (OOD) validation set to select hyperparameters and for early stopping (Koh et al., 2021). This validation set is drawn from a different distribution than both the training and the OOD test set, so tuning on it does not leak information on the test distribution. We did not use the in-distribution (ID) validation set. For image classification and regression, we used both RandAugment (Cubuk et al., 2020) and Cutout (DeVries & Taylor, 2017) as data augmentation for all methods. We did not use data augmentation for the remaining datasets. For some datasets, we also had ground truth labels for the "unlabeled" data, which we used to run fully-labeled ERM experiments. Overall, we ran $600+$ experiments for 7,000 GPU hours on NVIDIA V100s. See Appendix B for a discussion of which methods were applicable to which datasets; Appendix C for augmentation details; Appendix F for the fully-labeled experiments; Appendix D for further experimental details. + +# 6.2 RESULTS + +Table 2 shows mixed results on WILDS: most methods do not improve over standard empirical risk minimization (ERM) despite access to unlabeled data and careful hyperparameter tuning. In contrast, these methods have been shown to perform well on prior unsupervised adaptation benchmarks; in Appendix E, we verify our implementations by showing that these methods (with the exception of CORAL) outperform ERM on the real $\rightarrow$ sketch shift in DomainNet, a standard unsupervised adaptation benchmark for object classification (Peng et al., 2019). + +Image classification (IWILDCAM2020-WILDS, CAMELYON17-WILDS, and FMOW-WILDS). Data augmentation improved OOD performance on all three image classification datasets. The gain was the most substantial on CAMELYON17-WILDS, where vanilla ERM achieved $70.8\%$ accuracy, while ERM with data augmentation achieved $82.0\%$ accuracy. 
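The tuning protocol of Section 6.1 amounts to random search scored on the OOD validation metric: sample configurations, train, and keep the configuration with the best OOD validation score. A schematic sketch with a hypothetical search space and a stand-in scoring function (the actual grids are in Appendix D):

```python
# Random hyperparameter search, selecting by out-of-distribution
# validation performance, as in the tuning protocol of Section 6.1.
import random

SEARCH_SPACE = {  # hypothetical grid, for illustration only
    "lr": [1e-2, 1e-3, 1e-4],
    "weight_decay": [0.0, 1e-4],
}

def random_search(eval_on_ood_val, n_trials=10, seed=0):
    """Sample configs; keep the one with the best OOD-validation score."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = eval_on_ood_val(cfg)  # e.g., train, then measure OOD-val metric
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Stand-in for "train a model and evaluate on the OOD validation set":
toy_score = lambda cfg: -abs(cfg["lr"] - 1e-3) - cfg["weight_decay"]
best_cfg, _ = random_search(toy_score)
print(best_cfg)
```

Because the OOD validation set is drawn from a different distribution than the test set, selecting configurations this way does not leak information about the test distribution.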
On CAMELYON17-WILDS and FMOW-WILDS, where we had access to unlabeled target data, Noisy Student and SwAV pre-training consistently improved OOD performance and reduced variability across replicates. However, the other methods—CORAL, DANN, Pseudo-Label, and FixMatch—underperformed ERM. This was especially surprising for FixMatch, which performed very well on DomainNet (Appendix E). Both FixMatch and Noisy Student use pseudo-labeling and consistency regularization, but FixMatch dynamically computes pseudo-labels in each batch from the start of training, whereas Noisy Student first trains a teacher model to convergence on the labeled data and updates pseudolabels at a much slower rate. As in Xie et al. (2020), this suggests that dynamically updating pseudo-labels might hurt generalization.

On IWILDCAM2020-WILDS, where we had access to $4 \times$ as many unlabeled images from extra domains (distinct camera traps) but not to any images from the target domains, none of the benchmarked methods improved OOD performance compared to ERM. This was surprising, as many of these methods were originally shown to work in semi-supervised settings. One difference could be that the labeled and unlabeled examples in IWILDCAM2020-WILDS differ more significantly (as they originate from different camera traps) than in the original FixMatch paper (Sohn et al., 2020), which used i.i.d. labeled and unlabeled data, or the Noisy Student paper (Xie et al., 2020), which used ImageNet labeled data (Russakovsky et al., 2015) and JFT unlabeled data (Hinton et al., 2015).

IWILDCAM2020-WILDS: unlabeled extra, macro F1. FMOW-WILDS: unlabeled target, worst-region acc.

| Method | IWILDCAM2020-WILDS ID | IWILDCAM2020-WILDS OOD | FMOW-WILDS ID | FMOW-WILDS OOD |
| --- | --- | --- | --- | --- |
| ERM (-data aug) | 46.7 (0.6) | 30.6 (1.1) | 59.3 (0.7) | 33.7 (1.5) |
| ERM | 47.0 (1.4) | **32.2 (1.2)** | 60.6 (0.6) | 34.8 (1.5) |
| CORAL | 40.5 (1.4) | 27.9 (0.4) | 58.9 (0.3) | 34.1 (0.6) |
| DANN | 48.5 (2.8) | 31.9 (1.4) | 57.9 (0.8) | 34.6 (1.7) |
| Pseudo-Label | 47.3 (0.4) | 30.3 (0.4) | 60.9 (0.5) | 33.7 (0.2) |
| FixMatch | 46.3 (0.5) | 31.0 (1.3) | 58.6 (2.4) | 32.1 (2.0) |
| Noisy Student | 47.5 (0.9) | 32.1 (0.7) | 61.3 (0.4) | **37.8 (0.6)** |
| SwAV | 47.3 (1.4) | 29.0 (2.0) | 61.8 (1.0) | 36.3 (1.0) |
| ERM (fully-labeled) | 54.6 (1.5) | 44.0 (2.3) | 65.4 (0.4) | 58.7 (1.4) |

CAMELYON17-WILDS: unlabeled target, avg acc. POVERTYMAP-WILDS: unlabeled target, worst U/R corr.

| Method | CAMELYON17-WILDS ID | CAMELYON17-WILDS OOD | POVERTYMAP-WILDS ID | POVERTYMAP-WILDS OOD |
| --- | --- | --- | --- | --- |
| ERM (-data aug) | 85.8 (1.9) | 70.8 (7.2) | 0.65 (0.03) | **0.50 (0.07)** |
| ERM | 90.6 (1.2) | 82.0 (7.4) | 0.66 (0.04) | 0.49 (0.06) |
| CORAL | 90.4 (0.9) | 77.9 (6.6) | 0.54 (0.10) | 0.36 (0.08) |
| DANN | 86.9 (2.2) | 68.4 (9.2) | 0.50 (0.07) | 0.33 (0.10) |
| Pseudo-Label | 91.3 (1.3) | 67.7 (8.2) | - | - |
| FixMatch | 91.3 (1.1) | 71.0 (4.9) | 0.54 (0.11) | 0.30 (0.11) |
| Noisy Student | 93.2 (0.5) | 86.7 (1.7) | 0.61 (0.07) | 0.42 (0.11) |
| SwAV | 92.3 (0.4) | **91.4 (2.0)** | 0.60 (0.13) | 0.45 (0.05) |

GLOBALWHEAT-WILDS: unlabeled target, avg domain acc. OGB-MOLPCBA: unlabeled target, avg AP.

| Method | GLOBALWHEAT-WILDS ID | GLOBALWHEAT-WILDS OOD | OGB-MOLPCBA ID | OGB-MOLPCBA OOD |
| --- | --- | --- | --- | --- |
| ERM | 77.8 (0.1) | **50.5 (1.7)** | - | **28.3 (0.1)** |
| CORAL | - | - | - | 26.6 (0.2) |
| DANN | - | - | - | 20.4 (0.8) |
| Pseudo-Label | 75.2 (1.2) | 42.7 (4.8) | - | 19.7 (0.1) |
| Noisy Student | 78.8 (0.5) | 49.3 (3.7) | - | 27.5 (0.1) |

CIVILCOMMENTS-WILDS: unlabeled extra, worst-group acc. AMAZON-WILDS: unlabeled target, 10th percentile acc.

| Method | CIVILCOMMENTS-WILDS ID | CIVILCOMMENTS-WILDS OOD | AMAZON-WILDS ID | AMAZON-WILDS OOD |
| --- | --- | --- | --- | --- |
| ERM | 89.8 (0.8) | 66.6 (1.6) | 72.0 (0.1) | **54.2 (0.8)** |
| CORAL | - | - | 71.7 (0.1) | 53.3 (0.0) |
| DANN | - | - | 71.7 (0.1) | 53.3 (0.0) |
| Pseudo-Label | 90.3 (0.5) | **66.9 (2.6)** | 71.6 (0.1) | 52.3 (1.1) |
| Masked LM | 89.4 (1.2) | 65.7 (2.3) | 71.9 (0.4) | 53.9 (0.7) |
| ERM (fully-labeled) | 89.9 (0.1) | 69.4 (0.6) | 73.6 (0.1) | 56.4 (0.8) |

Table 2: The in-distribution (ID) and out-of-distribution (OOD) performance of each method on each applicable dataset. Following WILDS 1.0, we ran 3-10 replicates (random seeds) for each cell, depending on the dataset. We report the standard deviation across replicates in parentheses; the standard error (of the mean) is lower by the square root of the number of replicates. Fully-labeled experiments use ground truth labels on the "unlabeled" data. We bold the highest non-fully-labeled OOD performance numbers as well as others where the standard error is within range. Below each dataset name, we report the type of unlabeled data and metric used.

Fully-labeled ERM models that used ground truth labels for the "unlabeled" data were available for FMOW-WILDS and IWILDCAM2020-WILDS. They significantly outperformed other methods, suggesting room for improvement in how we leverage the unlabeled data.

Image regression (POVERTYMAP-WILDS). Data augmentation had no effect on performance on POVERTYMAP-WILDS, which differs from the above image datasets in that it is a regression task and involves multi-spectral satellite images (with 7 channels); both of these aspects are relatively unstudied compared to standard RGB image classification. All applicable methods underperformed standard ERM, despite having access to unlabeled data from the target domains (countries).

Image detection (GLOBALWHEAT-WILDS).
We did not apply data augmentation here, as standard augmentation changes the labels (e.g., cropping the image might remove bounding boxes) and would violate the assumption that labels are invariant under augmentations, which contrastive and consistency regularization methods like SwAV, Noisy Student, and FixMatch rely on. Accordingly, we did not evaluate FixMatch and SwAV, and we modified Noisy Student to remove data augmentation noise. All applicable methods underperformed ERM. + +Molecule classification (OGB-MOLPCBA). We did not apply data augmentation techniques to OGB-MOLPCBA as they are not well-developed for molecular graphs. All methods underperformed ERM. We did not report ID results as this dataset has no separate ID test set. + +Text classification (CIVILCOMMENTS-WILDS, AMAZON-WILDS). Likewise, we did not apply data augmentation to the text datasets. On both datasets, other methods performed similarly to ERM (with class-balancing for CIVILCOMMENTS-WILDS). Continued masked LM pre-training on the unlabeled data did not improve target performance, unlike in prior work (Gururangan et al., 2020); this might be because the BERT pre-training corpus (Devlin et al., 2019; Hendrycks et al., 2020) is more similar to the online comments in CIVILCOMMENTS-WILDS and product reviews in AMAZON-WILDS than to the biomedical/CS text studied in Gururangan et al. (2020). Also, CIVILCOMMENTS-WILDS and AMAZON-WILDS measure subpopulation performance (on minority demographics and on the tail subpopulation, respectively), whereas prior work adapted models to new areas of the input space (e.g., from news to biomedical articles). Fully-labeled ERM models showed modest gains compared to FMOW-WILDS and IWILDCAM2020-WILDS.
As our evaluations on these text datasets focus on subpopulation performance, these results are consistent with prior observations that ERM models can have poor subpopulation performance even with large labeled training sets (Sagawa et al., 2020), necessitating other approaches to subpopulation shifts. + +# 7 DISCUSSION + +We conclude by discussing several takeaways and promising directions for future work. + +The role of data augmentation. Many unsupervised adaptation methods rely heavily on data augmentation for consistency regularization or contrastive learning. This reliance on data augmentation techniques—which are largely image-specific—restricts their generality, as they do not readily generalize to other modalities (or even other types of images besides photos). Developing data augmentation techniques that can work well in other applications and modalities could be crucial for expanding the applicability of these methods (Verma et al., 2021). + +Hyperparameter tuning. Unsupervised adaptation methods have even more hyperparameters than standard supervised methods, and consistent with prior work, we found that these hyperparameters can significantly affect OOD performance (Saito et al., 2021). Moreover, unlike in standard i.i.d. settings, we do not have labeled target data that we can use for hyperparameter selection. Improved methods for hyperparameter tuning could significantly improve OOD performance. Such methods might make use of the unlabeled target data, or even the combination of labeled and unlabeled OOD validation data, which is provided for most datasets in WILDS 2.0. + +Pre-training on broader unlabeled data. Pre-training on huge amounts of unlabeled data improves robustness to distribution shifts in some settings (Bommasani et al., 2021).
The unlabeled data need not be related to the task: e.g., CLIP was pre-trained on text-image pairs from the internet but tested on tasks including histopathology and satellite image classification (Radford et al., 2021a). Existing techniques for this type of broad pre-training appear insufficient for WILDS: many of our models were initialized with ImageNet-pretrained weights or derivatives of BERT, but did not generalize well OOD. While we focused on providing curated unlabeled data that is closely tailored to the task, it could be fruitful to use both broad and curated unlabeled data. + +Leveraging domain annotations and task-specific structure. OOD robustness is ill-posed in general, as models cannot be robust to arbitrary distribution shifts. Beyond unlabeled data, WILDS also has domain annotations and other structured metadata for both labeled and unlabeled data (e.g., in IWILDCAM2020-WILDS, we know which images were taken from which cameras). Exploiting this type of fine-grained domain structure for unsupervised adaptation—e.g., through multi-source/multi-target domain adaptation methods (Zhao et al., 2018; Peng et al., 2019)—could be a promising avenue for learning models that are more robust to the domain shifts in WILDS. + +# ETHICS STATEMENT + +All WILDS datasets are curated and adapted from public data sources, with licenses that allow for public release. The datasets are all anonymized. + +The distribution shifts in several of the WILDS datasets deal with issues of discrimination and bias that arise in real-world applications. For example, CIVILCOMMENTS-WILDS studies disparate model performance across online comments that mention different demographic groups, while FMOW-WILDS and POVERTYMAP-WILDS study countries and regions where labeled satellite data is less readily available.
As our results suggest, standard models trained on these datasets will not perform well on those subpopulations, and their learned representations might also be biased in undesirable ways (Bolukbasi et al., 2016; Caliskan et al., 2017; Garg et al., 2018; Tan & Celis, 2019; Steed & Caliskan, 2021). We also encourage caution in interpreting positive results on these datasets, as our evaluation metrics might not encompass all relevant facets of discrimination and bias: e.g., the "ground truth" toxicity annotations in CIVILCOMMENTS-WILDS can themselves be biased, and the particular choice of regions in FMOW-WILDS might obscure lower model performance in sub-regions. + +For FMOW-WILDS and POVERTYMAP-WILDS, surveillance and privacy issues also need to be considered. In FMOW-WILDS, the image resolution is lower than that of other public satellite data (e.g., from Google Maps), and in POVERTYMAP-WILDS, the location metadata is noised to protect privacy. For a deeper discussion of the ethics of remote sensing in the context of humanitarian aid and development, we refer readers to the UNICEF report by Berman et al. (2018). + +# REPRODUCIBILITY STATEMENT + +All WILDS datasets are publicly available at https://wilds.stanford.edu, together with code and scripts to replicate all of the experiments in this paper. We also provide all trained model checkpoints and results, together with the exact hyperparameters used. + +In our appendices, we provide more details on the datasets and experiments: + +- In Appendix A, we describe each of the updated datasets in WILDS 2.0 and their sources of unlabeled data as well as what data processing steps were taken. +- In Appendix B, we describe the implementations of each of our benchmarked methods in detail. In particular, we discuss any changes we made to their original implementations, either for consistency with other methods or with prior implementations of these methods.
+- In Appendix C, we describe details of the data augmentations (if any) that we used across each dataset. +- In Appendix D, we describe our experimental protocol, including the hyperparameter selection procedure and hyperparameter grids for all of the methods and datasets. +- In Appendix E, we describe the details of our experiments on DomainNet. +- In Appendix F, we describe the details of our fully-labeled ERM experiments. +- Finally, in Appendix G, we include an illustrative code snippet of how to use the data loaders in the WILDS library. + +# AUTHOR CONTRIBUTIONS + +The project was initiated by Shiori Sagawa, Pang Wei Koh, and Percy Liang. Shiori Sagawa and Pang Wei Koh led the project and coordinated the activities below. Tony Lee developed the experimental infrastructure and ran the experiments. Tony Lee, Irena Gao, Sang Michael Xie, Kendrick Shen, Ananya Kumar, and Michihiro Yasunaga designed the evaluation framework and implemented the algorithms. The unlabeled data loaders and corresponding dataset writeups were added by: + +- AMAZON-WILDS: Tony Lee +- CAMELYON17-WILDS: Tony Lee +- CIVILCOMMENTS-WILDS: Irena Gao +- FMOW-WILDS: Sang Michael Xie +- IWILDCAM2020-WILDS: Henrik Marklund and Sara Beery +- OGB-MOLPCBA: Weihua Hu +- POVERTYMAP-WILDS: Sang Michael Xie +- GLOBALWHEAT-WILDS: Etienne David, Ian Stavness, and Wei Guo. + +Tony Lee and Henrik Marklund set up the website and leaderboards. Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn, and Percy Liang provided advice on the overall project direction and experimental design and analysis throughout. Shiori Sagawa, Pang Wei Koh, and Irena Gao drafted the paper; all authors contributed towards writing the final paper. + +# ACKNOWLEDGEMENTS + +We would like to thank Ashwin Ramaswami, Berton Earnshaw, Bowen Liu, Hongseok Namkoong, Junguang Jiang, Ludwig Schmidt, Robbie Jones, Robin Jia, Ruijia Xu, and Yabin Zhang for their helpful advice.
+
+The design of the WILDS benchmark was inspired by the Open Graph Benchmark (Hu et al., 2020b), and we are grateful to the Open Graph Benchmark team for their advice and help in setting up our benchmark.
+
+This project was funded by an Open Philanthropy Project Award and NSF Award Grant No. 1805310. Shiori Sagawa was supported by the Herbert Kunzel Stanford Graduate Fellowship and the Apple Scholars in AI/ML PhD fellowship. Sang Michael Xie was supported by a NDSEG Graduate Fellowship. Ananya Kumar was supported by the Rambus Corporation Stanford Graduate Fellowship. Weihua Hu was supported by the Funai Overseas Scholarship and the Masason Foundation Fellowship. Michihiro Yasunaga was supported by the Microsoft Research PhD Fellowship. Henrik Marklund was supported by the Dr. Tech. Marcus Wallenberg Foundation for Education in International Industrial Entrepreneurship, CIFAR, and Google. Sara Beery was supported by an NSF Graduate Research Fellowship and is a PIMCO Fellow in Data Science. Jure Leskovec is a Chan Zuckerberg Biohub investigator. Chelsea Finn is a CIFAR Fellow in the Learning in Machines and Brains Program.
+
+We also gratefully acknowledge the support of DARPA under Nos. N660011924033 (MCS); ARO under Nos. W911NF-16-1-0342 (MURI), W911NF-16-1-0171 (DURIP); NSF under Nos. OAC-1835598 (CINES), OAC-1934578 (HDR), CCF-1918940 (Expeditions), IIS-2030477 (RAPID); Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Chan Zuckerberg Biohub, Amazon, JPMorgan Chase, Docomo, Hitachi, JD.com, KDDI, NVIDIA, Dell, Toshiba, and UnitedHealth Group.
+
+# REFERENCES
+
+Abubakar Abid, Maheen Farooqi, and James Zou. Persistent anti-Muslim bias in large language models. arXiv preprint arXiv:2101.05783, 2021.
+Jorge A Ahumada, Eric Fegraus, Tanya Birch, Nicole Flores, Roland Kays, Timothy G O'Brien, Jonathan Palmer, Stephanie Schuttler, Jennifer Y Zhao, Walter Jetz, et al.
Wildlife insights: A platform to maximize the potential of camera trap and other passive sensor wildlife data for the planet. Environmental Conservation, 47(1):1-6, 2020.
+Saad Ullah Akram, Talha Qaiser, Simon Graham, Juho Kannala, Janne Heikkilä, and Nasir Rajpoot. Leveraging unlabeled whole-slide-images for mitosis detection. Computational Pathology and Ophthalmic Medical Image Analysis, 1:69-77, 2018.
+EA AlBadawy, A Saha, and MA Mazurowski. Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing. Med Phys., 45, 2018.
+
+Shekoofeh Azizi, Basil Mustafa, Fiona Ryan, Zachary Beaver, Jan Freyberg, Jonathan Deaton, Aaron Loh, Alan Karthikesalingam, Simon Kornblith, Ting Chen, et al. Big self-supervised models advance medical image classification. arXiv preprint arXiv:2101.05224, 2021.
+Peter Bandi, Oscar Geessink, Quirine Manson, Marcory Van Dijk, Maschenka Balkenhol, Meyke Hermsen, Babak Ehteshami Bejnordi, Byungjae Lee, Kyunghyun Paeng, Aoxiao Zhong, et al. From detection of individual metastases to classification of lymph node status at the patient level: the CAMELYON17 challenge. IEEE Transactions on Medical Imaging, 38(2):550-560, 2018.
+Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In European Conference on Computer Vision (ECCV), pp. 456-473, 2018.
+Sara Beery, Dan Morris, and Siyu Yang. Efficient pipeline for camera trap image review. arXiv preprint arXiv:1907.06772, 2019.
+Sara Beery, Elijah Cole, and Arvi Gjoka. The iWildCam 2020 competition dataset. arXiv preprint arXiv:2004.10340, 2020.
+Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79(1):151-175, 2010.
+Gabrielle Berman, Sara de la Rosa, and Tanya Accone. Ethical considerations when using geospatial technologies for evidence generation. Innocenti Discussion Paper, UNICEF Office of Research, 2018.
+David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, and Alex Kurakin. Adamatch: A unified approach to semi-supervised learning and domain adaptation. arXiv preprint arXiv:2106.04732, 2021. +Lukas Biewald. Experiment tracking with weights and biases, 2020. URL https://www.wandb.com/. Software available from wandb.com. +John Blitzer and Fernando Pereira. Domain adaptation of natural language processing systems. University of Pennsylvania, 2007. +John Blitzer, Mark Dredze, and Fernando Pereira. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th annual meeting of the association of computational linguistics, pp. 440-447, 2007. +Evan E Bolton, Yanli Wang, Paul A Thiessen, and Stephen H Bryant. Pubchem: integrated platform of small molecules and biological activities. In Annual reports in computational chemistry, volume 4, pp. 217-241. Elsevier, 2008. +Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems (NeurIPS), pp. 4349-4357, 2016. +Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dorottya Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. 
Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.
+Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. Nuanced metrics for measuring unintended bias with real data for text classification. In World Wide Web (WWW), pp. 491-500, 2019.
+Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners.
arXiv preprint arXiv:2005.14165, 2020.
+Maxwell Burnette, Rob Kooper, J. D. Maloney, Gareth S. Rohde, Jeffrey A. Terstriep, Craig Willis, Noah Fahlgren, Todd Mockler, Maria Newcomb, Vasit Sagan, Pedro Andrade-Sanchez, Nadia Shakoor, Paheding Sidike, Rick Ward, and David LeBauer. Terra-ref data processing infrastructure. In Proceedings of the Practice and Experience on Advanced Research Computing, PEARC '18, New York, NY, USA, 2018. Association for Computing Machinery. ISBN 9781450364461.
+Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186, 2017.
+Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. In Advances in Neural Information Processing Systems (NeurIPS), volume 33, pp. 9912-9924, 2020.
+Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. LEGAL-BERT: "preparing the muppets for court". In Empirical Methods in Natural Language Processing (EMNLP), pp. 2898-2904, 2020.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning (ICML), pp. 1597-1607, 2020a.
+Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning (ICML), pp. 1597-1607, 2020b.
+Gordon Christie, Neil Fendley, James Wilson, and Ryan Mukherjee. Functional map of the world. In Computer Vision and Pattern Recognition (CVPR), 2018.
+Ozan Ciga, Anne L Martel, and Tony Xu. Self supervised contrastive learning for digital histopathology. arXiv preprint arXiv:2011.13971, 2020.
+Jonathan H Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. Tydi qa: A benchmark for information-seeking question answering in typologically diverse languages. arXiv preprint arXiv:2003.05002, 2020. +Elijah Cole, Xuan Yang, Kimberly Wilber, Oisin Mac Aodha, and Serge Belongie. When does contrastive visual representation learning work? arXiv preprint arXiv:2105.05837, 2021. +Alexis Conneau and Guillaume Lample. Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems (NeurIPS), pp. 7059–7069, 2019. +Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. Xnli: Evaluating cross-lingual sentence representations. In Empirical Methods in Natural Language Processing (EMNLP), pp. 2475–2485, 2018. + +Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3213-3223, 2016. +Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Computer Vision and Pattern Recognition (CVPR), pp. 702-703, 2020. +Sajib Dasgupta and Vincent Ng. Mine the easy, classify the hard: a semi-supervised approach to automatic sentiment classification. In Conference on Natural Language Processing (KONVENS), pp. 701-709, 2009. +Etienne David, Simon Madec, Pouria Sadeghi-Tehran, Helge Aasen, Bangyou Zheng, Shouyang Liu, Norbert Kirchgessner, Goro Ishikawa, Koichi Nagasawa, Minhajul A Badhon, Curtis Pozniak, Benoit de Solan, Andreas Hund, Scott C. Chapman, Frederic Baret, Ian Stavness, and Wei Guo. 
Global wheat head detection (gwhd) dataset: a large and diverse dataset of high-resolution rgb-labelled images to develop and benchmark wheat head detection methods. Plant Phenomics, 2020, 2020.
+Etienne David, Mario Serouart, Daniel Smith, Simon Madec, Kaaviya Velumani, Shouyang Liu, Xu Wang, Francisco Pinto, Shahameh Shafiee, Izzat S. A. Tahir, Hisashi Tsujimoto, Shuhei Nasuda, Bangyou Zheng, Norbert Kirchgessner, Helge Aasen, Andreas Hund, Pouria Sadeghi-Tehran, Koichi Nagasawa, Goro Ishikawa, Sébastien Dandrifosse, Alexis Carlier, Benjamin Dumont, Benoit Mercatoris, Byron Evers, Ken Kuroki, Haozhou Wang, Masanori Ishii, Minhajul A. Badhon, Curtis Pozniak, David Shaner LeBauer, Morten Lillemo, Jesse Poland, Scott Chapman, Benoit de Solan, Frédéric Baret, Ian Stavness, and Wei Guo. Global wheat head detection 2021: An improved dataset for benchmarking wheat head detection methods. Plant Phenomics, 2021, 2021.
+Olivier Dehaene, Axel Camara, Olivier Moindrot, Axel de Lavergne, and Pierre Courtiol. Self-supervision closes the gap between weak and strong supervision in histology. arXiv preprint arXiv:2012.03583, 2020.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Association for Computational Linguistics (ACL), pp. 4171-4186, 2019.
+Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
+Josip Djolonga, Jessica Yung, Michael Tschannen, Rob Romijnders, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, Matthias Minderer, Alexander D'Amour, Dan Moldovan, et al. On robustness and transferability of convolutional neural networks. arXiv preprint arXiv:2007.08558, 2020.
+Abhimanyu Dubey, Vignesh Ramanathan, Alex Pentland, and Dhruv Mahajan. Adaptive methods for real-world domain generalization. In Computer Vision and Pattern Recognition (CVPR), 2021.
+
+Dumitru Erhan, Aaron Courville, Yoshua Bengio, and Pascal Vincent. Why does unsupervised pretraining help deep learning? In Artificial Intelligence and Statistics (AISTATS), pp. 201-208, 2010.
+Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research (JMLR), 17, 2016.
+Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. Word embeddings quantify 100 years of gender and ethnic stereotypes. Science, 115, 2018.
+Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. arXiv preprint arXiv:2009.11462, 2020.
+
+Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2020.
+Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations (ICLR), 2018.
+Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Artificial Intelligence and Statistics (AISTATS), pp. 315-323, 2011.
+Yves Grandvalet and Yoshua Bengio. Entropy regularization. In Semi-Supervised Learning, 2005.
+Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. Domain-specific language model pretraining for biomedical natural language processing. arXiv preprint arXiv:2007.15779, 2020.
+Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. arXiv preprint arXiv:2007.01434, 2020.
+Suchin Gururangan, Ana Marasovic, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. Don't stop pretraining: adapt language models to domains and tasks.
arXiv preprint arXiv:2004.10964, 2020.
+Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. REALM: Retrieval-augmented language model pre-training. In International Conference on Machine Learning (ICML), 2020.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), 2016.
+Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Computer Vision and Pattern Recognition (CVPR), 2020.
+Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. Pretrained transformers improve out-of-distribution robustness. arXiv preprint arXiv:2004.06100, 2020.
+Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop, 2015.
+Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A. Efros, and Trevor Darrell. Cycada: Cycle consistent adversarial domain adaptation. In International Conference on Machine Learning (ICML), 2018.
+Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. arXiv preprint arXiv:2003.11080, 2020a.
+Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for machine learning on graphs. In Advances in Neural Information Processing Systems (NeurIPS), 2020b.
+Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. In International Conference on Learning Representations (ICLR), 2020c.
+Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700-4708, 2017. +Jonathan J. Hull. A database for handwritten text recognition research. IEEE Transactions on pattern analysis and machine intelligence, 16(5):550-554, 1994. + +Neal Jean, Marshall Burke, Michael Xie, W. Matthew Davis, David B. Lobell, and Stefano Ermon. Combining satellite imagery and machine learning to predict poverty. Science, 353, 2016. +Neal Jean, Sang Michael Xie, and Stefano Ermon. Semi-supervised deep kernel learning: Regression with unlabeled data by minimizing predictive variance. In Advances in Neural Information Processing Systems (NeurIPS), 2018. +Junguang Jiang, Baixu Chen, Bo Fu, and Mingsheng Long. Transfer learning library. https://github.com/thuml/Transfer-Learning-Library, 2020. +Amita Kamath, Robin Jia, and Percy Liang. Selective question answering under domain shift. In Association for Computational Linguistics (ACL), 2020. +Benjamin Kellenberger, Diego Marcos, Sylvain Lobry, and Devis Tuia. Half a percent of labels is enough: Efficient animal detection in uav imagery using deep cnns and active learning. IEEE Transactions on Geoscience and Remote Sensing, 57(12):9524-9533, 2019. +Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton A. Earnshaw, Imran S. Haque, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. WILDS: A benchmark of in-the-wild distribution shifts. In International Conference on Machine Learning (ICML), 2021. +Daisuke Komura and Shumpei Ishikawa. Machine learning methods for histopathological image analysis. Computational and structural biotechnology journal, 16:34-42, 2018. +Navid Alemi Koohbanani, Balagopal Unnikrishnan, Syed Ali Khurram, Pavitra Krishnaswamy, and Nasir Rajpoot. 
Self-path: Self-supervision for classification of pathology images with limited annotations. IEEE Transactions on Medical Imaging, 1, 2021. +Greg Landrum et al. Rdkit: Open-source cheminformatics, 2006. +Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. +Dong-Hyun Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In ICML Workshop on Challenges in Representation Learning, 2013. +Jason D Lee, Qi Lei, Nikunj Saunshi, and Jiacheng Zhuo. Predicting what you already know helps: Provable self-supervised learning. arXiv preprint arXiv:2008.01064, 2020a. +Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240, 2020b. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Association for Computational Linguistics (ACL), 2020. +Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In Proceedings of the IEEE international conference on computer vision, pp. 5542-5550, 2017. +Shoushan Li, Zhongqing Wang, Guodong Zhou, and Sophia Yat Mei Lee. Semi-supervised learning for imbalanced sentiment classification. In International Joint Conference on Artificial Intelligence (IJCAI), 2011. +Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International conference on machine learning, pp. 97-105, 2015. +Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. 
In International conference on machine learning, pp. 2208-2217, 2017. + +Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. In Advances in Neural Information Processing Systems (NeurIPS), 2018. +Ming Y Lu, Richard J Chen, Jingwen Wang, Debora Dillon, and Faisal Mahmood. Semi-supervised histology classification using deep multiple instance learning and contrastive predictive coding. arXiv preprint arXiv:1910.10825, 2019. +Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation with multiple sources. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1041-1048, 2009. +John Miller, Karl Krauth, Benjamin Recht, and Ludwig Schmidt. The effect of natural distribution shift on question answering models. arXiv preprint arXiv:2004.14444, 2020. +John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, and Ludwig Schmidt. Accuracy on the line: on the strong correlation between out-of-distribution and in-distribution generalization. In International Conference on Machine Learning (ICML), 2021. +Samarth Mishra, Kate Saenko, and Venkatesh Saligrama. Surprisingly simple semi-supervised domain adaptation with pretraining and consistency. arXiv preprint arXiv:2101.12727, 2021. +Moin Nadeem, Anna Bethke, and Siva Reddy. Stereoset: Measuring stereotypical bias in pretrained language models. arXiv preprint arXiv:2004.09456, 2020. 
+
+Wilhelmina Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Tajudeen Kolawole, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddeen Hassan Muhammad, Salomon Kabongo, Salomey Osei, Sackey Freshia, Rubungo Andre Niyongabo, Ricky Macharm, Perez Ogayo, Orevaoghene Ahia, Musie Meressa, Mofe Adeyemi, Masabata Mokgesi-Selinga, Lawrence Okegbemi, Laura Jane Martinus, Kolawole Tajudeen, Kevin Degila, Kelechi Ogueji, Kathleen Siminyu, Julia Kreutzer, Jason Webster, Jamil Toure Ali, Jade Abbott, Iroro Orife, Ignatius Ezeani, Idris Abdulkabir Dangana, Herman Kamper, Hady Elsahar, Goodness Duru, Ghollah Kioko, Espoir Murhabazi, Elan van Biljon, Daniel Whitenack, Christopher Onyefuluchi, Chris Emezue, Bonaventure Dossou, Blessing Sibanda, Blessing Itoro Bassey, Ayodele Olabiyi, Arshath Ramkilowan, Alp Öktem, Adewale Akinfaderin, and Abdallah Bashir. Participatory research for low-resourced machine translation: A case study in African languages. In Findings of Empirical Methods in Natural Language Processing (Findings of EMNLP), 2020.
+Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
+Jianmo Ni, Jiacheng Li, and Julian McAuley. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Empirical Methods in Natural Language Processing (EMNLP), pp. 188-197, 2019.
+Mohammad Sadegh Norouzzadeh, Dan Morris, Sara Beery, Neel Joshi, Nebojsa Jojic, and Jeff Clune. A deep active learning system for species identification and counting in camera trap images. Methods in Ecology and Evolution, 12(1):150-161, 2021.
+Jill Nugent. iNaturalist. Science Scope, 41(7):12-13, 2018.
+Yonatan Oren, Shiori Sagawa, Tatsunori Hashimoto, and Percy Liang. Distributionally robust language modeling. In Empirical Methods in Natural Language Processing (EMNLP), 2019.
+Omiros Pantazis, Gabriel Brostow, Kate Jones, and Oisin Mac Aodha. Focus on the positives: Self-supervised learning for biodiversity monitoring. arXiv preprint arXiv:2108.06435, 2021. +Mohammad Peikari, Sherine Salama, Sharon Nofech-Mozes, and Anne L Martel. A cluster-then-label semi-supervised learning approach for pathology image classification. Scientific reports, 8 (1):1-13, 2018. +Xingchao Peng, Ben Usman, Neela Kaushik, Dequan Wang, Judy Hoffman, and Kate Saenko. Visda: A synthetic-to-real benchmark for visual domain adaptation. In Computer Vision and Pattern Recognition (CVPR), pp. 2021-2026, 2018. + +Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In International Conference on Computer Vision (ICCV), 2019. +Joaquin Quinonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D. Lawrence. Dataset shift in machine learning. The MIT Press, 2009. +Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML), volume 139, pp. 8748-8763, 2021a. +Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020, 2021b. +Colorado J. Reed, Xiangyu Yue, Ani Nrusimha, Sayna Ebrahimi, Vivek Vijaykumar, Richard Mao, Bo Li, Shanghang Zhang, Devin Guillory, Sean Metzger, Kurt Keutzer, and Trevor Darrell. Self-supervised pretraining improves self-supervised pretraining. arXiv, 2021. +Jian Ren, Ilker Hacihaliloglu, Eric A Singer, David J Foran, and Xin Qi. Adversarial domain adaptation for classification of prostate histopathology whole-slide images. 
In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 201-209, 2018.
+Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 39:1137-1149, 2015.
+Stephan R Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. Playing for data: Ground truth from computer games. In European conference on computer vision, pp. 102-118, 2016.
+Tim Robertson, Markus Döring, Robert Guralnick, David Bloom, John Wieczorek, Kyle Braak, Javier Otegui, Laura Russell, and Peter Desmet. The gbif integrated publishing toolkit: facilitating the efficient publishing of biodiversity data on the internet. PloS one, 9(8):e102623, 2014.
+Alexander Robey, George J Pappas, and Hamed Hassani. Model-based domain generalization. arXiv preprint arXiv:2102.11436, 2021.
+David Rogers and Mathew Hahn. Extended-connectivity fingerprints. Journal of Chemical Information and Modeling, 50(5):742-754, 2010. doi: 10.1021/ci100050t.
+Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, and Junzhou Huang. Self-supervised graph transformer on large-scale molecular data. arXiv preprint arXiv:2007.02835, 2020.
+German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, and Antonio M Lopez. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3234-3243, 2016.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.
+Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains.
In European conference on computer vision, pp. 213-226, 2010.
+Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. In International Conference on Learning Representations (ICLR), 2020.
+Kuniaki Saito, Yoshitaka Ushiku, and Tatsuya Harada. Asymmetric tri-training for unsupervised domain adaptation. In International Conference on Machine Learning (ICML), pp. 2988-2997, 2017.
+
+Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3723-3732, 2018.
+Kuniaki Saito, Donghyun Kim, Piotr Teterwak, Stan Sclaroff, Trevor Darrell, and Kate Saenko. Tune it the right way: Unsupervised validation of domain adaptation via soft neighborhood density. arXiv preprint arXiv:2108.10860, 2021.
+Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.
+Shayne Shaw, Maciej Pajak, Aneta Lisowska, Sotirios A Tsaftaris, and Alison Q O'Neil. Teacher-student chain for efficient semi-supervised histology image classification. arXiv preprint arXiv:2003.08797, 2020.
+Kendrick Shen, Robbie Matthew Jones, Ananya Kumar, Sang Michael Xie, and Percy Liang. How does contrastive pre-training connect disparate domains? In NeurIPS Workshop on Distribution Shifts, 2021.
+Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. arXiv, 2020.
+Ryan Steed and Aylin Caliskan. Image representations learned with unsupervised pre-training contain human-like biases.
In ACM Conference on Fairness, Accountability, and Transparency (FAccT), pp. 701-713, 2021.
+Jong-Chyi Su and Subhransu Maji. The semi-supervised inaturalist-aves challenge at fgvc7 workshop. arXiv preprint arXiv:2103.06937, 2021.
+Baochen Sun and Kate Saenko. Deep coral: Correlation alignment for deep domain adaptation. In European Conference on Computer Vision (ECCV), 2016.
+Baochen Sun, Jiashi Feng, and Kate Saenko. Return of frustratingly easy domain adaptation. In Association for the Advancement of Artificial Intelligence (AAAI), 2016.
+Fan-Yun Sun, Jordan Hoffmann, Vikas Verma, and Jian Tang. Infograph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In International Conference on Learning Representations (ICLR), 2020.
+Yi Chern Tan and L Elisa Celis. Assessing social and intersectional biases in contextualized word representations. arXiv preprint arXiv:1911.01485, 2019.
+Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig Schmidt. Measuring robustness to natural distribution shifts in image classification. arXiv preprint arXiv:2007.00644, 2020.
+David Tellez, Geert Litjens, Péter Bándi, Wouter Bulten, John-Melle Bokhorst, Francesco Ciompi, and Jeroen van der Laak. Quantifying the effects of data augmentation and stain color normalization in convolutional neural networks for computational pathology. Medical Image Analysis, 58, 2019.
+Yao-Hung Hubert Tsai, Martin Q Ma, Han Zhao, Kun Zhang, Louis-Philippe Morency, and Ruslan Salakhutdinov. Conditional contrastive learning: Removing undesirable information in self-supervised representations. arXiv preprint arXiv:2106.02866, 2021.
+Lifu Tu, Garima Lalwani, Spandana Gella, and He He. An empirical study on robustness to spurious correlations using pre-trained language models. Transactions of the Association for Computational Linguistics (TACL), 8:621-633, 2020.
+Grant Van Horn, Elijah Cole, Sara Beery, Kimberly Wilber, Serge Belongie, and Oisin Mac Aodha. Benchmarking representation learning for natural world image collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12884-12893, 2021. + +Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Computer Vision and Pattern Recognition (CVPR), pp. 5018-5027, 2017. +Vikas Verma, Thang Luong, Kenji Kawaguchi, Hieu Pham, and Quoc Le. Towards domain-agnostic contrastive learning. In International Conference on Machine Learning (ICML), 2021. +Mitko Veta, Paul J Van Diest, Mehdi Jiwa, Shaimaa Al-Janabi, and Josien PW Pluim. Mitosis counting in breast cancer: Object-level interobserver agreement and comparison to an automatic method. PloS one, 11(8), 2016. +Pascal Vincent, Hugo Larochelle, Yoshua Bengio, , and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In International Conference on Machine Learning (ICML), 2008. +Rui Wang, Zuxuan Wu, Zejia Weng, Jingjing Chen, Guo-Jun Qi, and Yu-Gang Jiang. Cross-domain contrastive learning for unsupervised domain adaptation. arXiv, 2021. +Colin Wei, Sang Michael Xie, and Tengyu Ma. Why do pretrained language models help in downstream tasks? an analysis of head and prompt tuning. arXiv preprint arXiv:2106.09226, 2021. +Ben G Weinstein, Sergio Marconi, Stephanie Bohlman, Alina Zare, and Ethan White. Individual tree-crown detection in rgb imagery using semi-supervised deep learning neural networks. Remote Sensing, 11(11):1309, 2019. +Ben G Weinstein, Lindsey Gardner, Vienna Saccomanno, Ashley Steinkraus, Andrew Ortega, Kristen Brush, Glenda Yenni, Ann E McKellar, Rowan Converse, Christopher Lippitt, et al. A general deep learning model for bird detection in high resolution airborne imagery. bioRxiv, 2021. 
+John N Weinstein, Eric A Collisson, Gordon B Mills, Kenna R Mills Shaw, Brad A Ozenberger, Kyle Ellrott, Ilya Shmulevich, Chris Sander, Joshua M Stuart, Cancer Genome Atlas Research Network, et al. The cancer genome atlas pan-cancer analysis project. Nature genetics, 45(10), 2013. +Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Genisses, Aneesh S Pappu, Karl Leswing, and Vijay Pande. Molecularnet: a benchmark for molecular machine learning. Chemical science, 9(2):513-530, 2018. +Michael Xie, Neal Jean, Marshall Burke, David Lobell, and Stefano Ermon. Transfer learning from deep features for remote sensing and poverty mapping. In Association for the Advancement of Artificial Intelligence (AAAI), 2016. +Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V. Le. Self-training with noisy student improves imagenet classification. arXiv, 2020. +Sang Michael Xie, Ananya Kumar, Robbie Jones, Fereshte Khani, Tengyu Ma, and Percy Liang. In-N-out: Pre-training and self-training using auxiliary information for out-of-distribution robustness. In International Conference on Learning Representations (ICLR), 2021a. +Yaochen Xie, Zhao Xu, Jingtun Zhang, Zhengyang Wang, and Shuiwang Ji. Self-supervised learning of graph neural networks: A unified review. arXiv preprint arXiv:2102.10757, 2021b. +Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations (ICLR), 2018. +Ruijia Xu, Guanbin Li, Jihan Yang, and Liang Lin. Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation. In International Conference on Computer Vision (ICCV), pp. 1426-1435, 2019. +Christopher Yeh, Anthony Perez, Anne Driscoll, George Azzari, Zhongyi Tang, David Lobell, Stefano Ermon, and Marshall Burke. Using publicly available satellite imagery and deep learning to understand economic well-being in africa. Nature Communications, 11, 2020. 
Weichen Zhang, Wanli Ouyang, Wen Li, and Dong Xu. Collaborative and adversarial network for unsupervised domain adaptation. In Computer Vision and Pattern Recognition (CVPR), pp. 3801-3809, 2018.

Yabin Zhang, Haojian Zhang, Bin Deng, Shuai Li, Kui Jia, and Lei Zhang. Semi-supervised models are strong unsupervised domain adaptation learners. arXiv preprint arXiv:2106.00417, 2021.

Yifan Zhang, Hanbo Chen, Ying Wei, Peilin Zhao, Jiezhang Cao, Xinjuan Fan, Xiaoying Lou, Hailing Liu, Jinlong Hou, Xiao Han, et al. From whole slide imaging to microscopy: Deep microscopy adaptation network for histopathology cancer image classification. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 360-368, 2019a.

Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael Jordan. Bridging theory and algorithm for domain adaptation. In International Conference on Machine Learning (ICML), pp. 7404-7413, 2019b.

Han Zhao, Shanghang Zhang, Guanhang Wu, José MF Moura, Joao P Costeira, and Geoffrey J Gordon. Adversarial multiple source domain adaptation. Advances in Neural Information Processing Systems (NeurIPS), 31:8559-8570, 2018.

# APPENDICES

# A Additional dataset details

A.1 IWILDCAM2020-WILDS
A.2 CAMELYON17-WILDS
A.3 FMOW-WILDS
A.4 POVERTYMAP-WILDS
A.5 GLOBALWHEAT-WILDS
A.6 OGB-MolPCBA
A.7 CIVILCOMMENTS-WILDS
A.8 AMAZON-WILDS

# B Algorithm details

B.1 Empirical risk minimization (ERM)
B.2 Domain-invariant methods
B.3 Self-training methods
B.4 Self-supervision methods

# C Data augmentation

# D Experimental details

D.1 In-distribution vs. out-of-distribution performance
D.2 Model architectures
D.3 Batch sizes and batch normalization
D.4 Hyperparameter tuning
D.5 Algorithm-specific hyperparameters
D.6 Compute infrastructure

# E Experiments on DomainNet

# F Fully-labeled ERM experimental details

# G Using the WILDS library with unlabeled data

# A ADDITIONAL DATASET DETAILS

In this appendix, we provide additional details on the unlabeled data in WILDS 2.0. For more context on the motivation behind each dataset, the choice of evaluation metric, and the labeled data, please refer to the original WILDS paper (Koh et al., 2021).

# A.1 IWILDCAM2020-WILDS

The IWILDCAM2020-WILDS dataset was adapted from the iWildCam 2020 competition dataset, which is made up of data provided by the Wildlife Conservation Society (WCS) (Beery et al., 2020). Camera trap images are captured by motion-triggered static cameras placed in the wild to study wildlife in a non-invasive manner. Images are captured at high volumes (a single camera trap can capture 10K images in a month), and annotating these images requires species identification expertise and is time-intensive. However, there are tens of thousands of camera traps worldwide capturing images of wildlife that could be used as unlabeled training data. For example, Wildlife Insights (Ahumada et al., 2020) now contains almost 20M camera trap images collected across the globe, but a large proportion of that data is still unlabeled. Ideally, we could capture value from those images despite the lack of available labels. We extend IWILDCAM2020-WILDS with unlabeled data from a set of WCS camera traps entirely disjoint from the labeled dataset, representative of unlabeled data from a newly-deployed sensor network.

Problem setting. The task is to classify the species of animals in camera trap images. The input $x$ is an image from a camera trap, and the domain $d$ corresponds to the camera trap that captured the image.
The target $y$, provided only for the labeled training images, is one of 182 classes of animals. We seek to learn models that generalize well to new camera trap deployments, so the test data comes from domains unseen during training. Additionally, we evaluate the in-distribution performance on held-out images from camera traps in the training set.

Data. The data comes from multiple camera traps around the world, all provided by the Wildlife Conservation Society (WCS). The labeled data is the same as in Koh et al. (2021), and the unlabeled data comprise 819,120 images from 3215 WCS camera traps not included in iWildCam 2020. The camera traps are split as follows:

1. Source: 243 camera traps.
2. Validation (OOD): 32 camera traps.
3. Target (OOD): 48 camera traps.
4. Extra: 3215 camera traps.

The four sets of camera traps are disjoint. The distributions of the labeled and unlabeled camera traps are very similar, except that the labeled data does not contain cameras with photos taken before LandSat 8 data was available.
Split# Domains (camera traps)# Labeled examples# Unlabeled examples
Source243129,8090
Validation (ID)7,3140
Target (ID)8,1540
Validation (OOD)3214,9610
Target (OOD)4842,7910
Extra (OOD)32150819,120
Total3538203,029819,120
Table 3: Data for IWILDCAM2020-WILDS. Each domain corresponds to a different camera trap.

Broader context. There are large volumes of unlabeled natural world data that have been collected in growing repositories such as iNaturalist (Nugent, 2018), Wildlife Insights (Ahumada et al., 2020), and GBIF (Robertson et al., 2014). This data includes images or video collected by remote sensors or community scientists, GPS track data from on-animal devices, aerial data from drones or satellites, underwater sonar, bioacoustics, and eDNA. Methods that can harness the wealth of information in unlabeled ecological data are well-positioned to make significant breakthroughs in how we think about ecological and conservation-focused research. Natural-world and ecological benchmarks that provide unlabeled data include NEWT (Van Horn et al., 2021), investigating efficient task learning, and Semi-Supervised iNat (Su & Maji, 2021), which provides labeled data for only a subset of the taxonomic tree. Recent work has begun to adapt weakly-supervised and self-supervised approaches for these natural world settings, including probing the generality and efficacy of self-supervision (Cole et al., 2021), incorporating domain-relevant context into self-supervision (Pantazis et al., 2021), or leveraging weak supervision from alternative data modalities (Weinstein et al., 2019) or pre-trained, generic models (Weinstein et al., 2021; Beery et al., 2019). Active learning also plays a role here in seeking to adapt models efficiently to unlabeled data from novel regions with only a few targeted labels (Kellenberger et al., 2019; Norouzzadeh et al., 2021).

# A.2 CAMELYON17-WILDS

The CAMELYON17-WILDS dataset (Koh et al., 2021) was adapted from the Camelyon17 dataset (Bandi et al., 2018), which is a collection of whole-slide images (WSIs) of breast cancer metastases in lymph node sections from 5 hospitals in the Netherlands.
The labels were obtained by asking expert pathologists to perform pixel-level annotations of each WSI, which is an expensive and painstaking process. In practice, unlabeled WSIs (i.e., WSIs without pixel-level annotations) are much easier to obtain. For example, only a fraction of the WSIs in the original Camelyon17 dataset (Bandi et al., 2018) were labeled; the other WSIs, which are taken from the same 5 hospitals, were provided without labels. In this work, we augment the CAMELYON17-WILDS dataset with unlabeled data from these WSIs.

Problem setting. The task is to classify whether a histological image patch contains any tumor tissue. We consider generalizing from a set of training hospitals to new hospitals at test time. The input $x$ corresponds to a $96 \times 96$ image patch extracted from a WSI of a lymph node section, the label $y$ is a binary indicator of whether the central $32 \times 32$ patch of the input contains any pixel that was annotated as a tumor in the WSI, and the domain $d$ identifies which hospital the patch came from. Each patch also includes metadata on which WSI it was extracted from, though we do not use this metadata for training or evaluation. Models are evaluated by their average accuracy on a class-balanced test dataset.

Data. All of the labeled and unlabeled data are taken from the Camelyon17 dataset (Bandi et al., 2018), which consists of WSIs from 5 hospitals (domains) in the Netherlands. We provide unlabeled data from the same domains as the labeled CAMELYON17-WILDS dataset (no extra domains). The domains are split as follows:

1. Source: Hospitals 1, 2, and 3.
2. Validation (OOD): Hospital 4.
3. Target (OOD): Hospital 5.

CAMELYON17-WILDS also includes a Validation (ID) set which contains data from the training hospitals.

The CAMELYON17-WILDS dataset has a total of 455,954 labeled patches across these splits, derived from the 10 WSIs per hospital that have full pixel-level annotations. We augment the dataset with a total of 2,999,307 unlabeled patches, extracted from an additional 90 unlabeled WSIs per hospital. There is no overlap between the WSIs used for the labeled versus unlabeled data. To extract and process each patch, we followed the same data processing steps that were carried out for the labeled data in Koh et al. (2021).

Unlike the labeled patches, which were sampled in a class-balanced manner (i.e., half of the patches have positive labels), we sampled the unlabeled patches uniformly at random from the unlabeled WSIs. We sampled 6,667 patches per unlabeled WSI, with the single exception of one WSI which had only 5,824 valid patches, resulting in a total of 2,999,307 unlabeled patches (Table 4). While the labeled patches were sampled in a class-balanced manner, the underlying label distribution skews heavily negative (approximately $95\%$ of the patches in a WSI are negative), so we expect the unlabeled patches to be similarly skewed in their label distribution.

| Split | # Domains (hospitals) | # Labeled examples | # Unlabeled examples |
|---|---|---|---|
| Source | 3 | 302,436 | 1,799,247 |
| Validation (ID) | | 33,560 | 0 |
| Validation (OOD) | 1 | 34,904 | 600,030 |
| Target (OOD) | 1 | 85,054 | 600,030 |
| Total | 5 | 455,954 | 2,999,307 |

Table 4: Data for CAMELYON17-WILDS. Each domain corresponds to a different hospital.

Broader context. We focused on providing unlabeled data from the same hospitals (domains) as in the original labeled CAMELYON17-WILDS dataset. This unlabeled data from the training and test hospitals can be used to develop and evaluate methods for semi-supervised learning (Peikari et al., 2018; Akram et al., 2018; Lu et al., 2019; Shaw et al., 2020) and domain adaptation (Ren et al., 2018; Zhang et al., 2019a; Koohbanani et al., 2021), respectively. In practice, there is also a large amount of unlabeled data from different domains that is publicly available: for example, The Cancer Genome Atlas (TCGA) hosts tens of thousands of publicly-available slide images across a variety of cancer types and from many different hospitals (Weinstein et al., 2013). These large and diverse datasets need not even be directly relevant to the task at hand, e.g., one could pretrain a model on images for different types of cancer even if the goal were to develop a model for breast cancer. Recent work has started to explore the use of these large and diverse datasets for computational pathology applications (Ciga et al., 2020; Dehaene et al., 2020) and in other medical imaging applications (Azizi et al., 2021).
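The patch-labeling rule described above (a patch is positive iff any pixel of its central $32 \times 32$ region is annotated as tumor) can be sketched as follows. `patch_label` is a hypothetical helper for illustration, not part of the WILDS codebase:

```python
import numpy as np

def patch_label(tumor_mask: np.ndarray) -> bool:
    """Binary label for a 96x96 pixel-level tumor annotation mask.

    The patch is positive iff any pixel in the central 32x32 region
    was annotated as tumor, mirroring the rule described above.
    """
    assert tumor_mask.shape == (96, 96)
    center = tumor_mask[32:64, 32:64]  # central 32x32 window
    return bool(center.any())
```

Tumor pixels outside the central window thus do not make a patch positive, which keeps the label tied to the patch center rather than its context.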
# A.3 FMOW-WILDS

The FMOW-WILDS dataset (Koh et al., 2021) was adapted from the FMoW dataset (Christie et al., 2018), which consists of global satellite images from 2002-2018, labeled with the functional purpose of the buildings or land in the image. The labels are collected by a process which combines map data with crowdsourced annotations (from a trusted crowd). In contrast, unlabeled satellite imagery is readily available across the globe. In this work, we augment the FMOW-WILDS dataset with unused satellite images that were part of the original FMoW dataset but not in the FMOW-WILDS dataset.

Problem setting. The task is to classify the building or land-use type of a satellite image. We consider generalizing from images before 2013 to after 2013, as well as considering the performance on the worst-case geographic region (Africa, the Americas, Oceania, Asia, or Europe). The input $x$ is an RGB satellite image ($224 \times 224$ pixels). The label $y$ is one of 62 building or land use categories. The domain $d$ represents both the year and the geographical region of the image. Each image also includes metadata on the location and time of the image, although we do not use these except for splitting the domains. Models are evaluated by their average and worst-region accuracies in the OOD timeframe.

Data. The labeled and unlabeled data are taken from the FMoW dataset (Christie et al., 2018). We provide unlabeled data from the same domains as the labeled FMOW-WILDS dataset (no extra domains). The domains are as follows:

1. Source: Images from 2002-2013.
2. Validation (OOD): Images from 2013-2016.
3. Target (OOD): Images from 2016-2018.

| Split | # Domains (years × region) | # Labeled examples | # Unlabeled examples |
|---|---|---|---|
| Source | 11 × 5 | 76,863 | 11,948 |
| Validation (ID) | | 11,483 | 0 |
| Target (ID) | | 11,327 | 0 |
| Validation (OOD) | 3 × 5 | 19,915 | 155,313 |
| Target (OOD) | 2 × 5 | 22,108 | 173,208 |
| Total | 16 × 5 | 141,696 | 340,469 |

Table 5: Data for FMOW-WILDS. Each domain corresponds to a different year and geographical region.

All of these domains have disjoint locations. FMOW-WILDS also includes Validation (ID) and Target (ID) sets which contain data from the training domains of 2002-2013.

The FMOW-WILDS dataset has 141,696 labeled images across these splits. We augment the dataset with 340,469 unlabeled images. These images come from two sources:

1. We use a sequestered split of the dataset, which consists of new locations that are not in the original labeled FMOW-WILDS dataset; these unlabeled data are drawn from the same distribution as the labeled data.
2. For the unlabeled target and validation splits, we also add unlabeled data in their respective timeframes from the training set locations. While the unlabeled data from the Validation (OOD) and Target (OOD) domains can come from the same locations as the labeled training data, we note that none of the locations in the labeled Validation (OOD) or Target (OOD) data, which is used for evaluation, is shared with any of the unlabeled or labeled data used for training.

Broader context. We focus on providing unlabeled data from the years (domains) that were in the original FMOW-WILDS dataset. Prior works have used unlabeled satellite imagery for pretraining (Xie et al., 2016; Jean et al., 2016; Xie et al., 2021a; Reed et al., 2021), self-training (Xie et al., 2021a), and semi-supervised learning (Reed et al., 2021). Leveraging unlabeled satellite imagery is powerful since it is widely available and can reduce the frequency at which we need to re-collect labeled data.

# A.4 POVERTYMAP-WILDS

The POVERTYMAP-WILDS dataset (Koh et al., 2021) was adapted from Yeh et al. (2020). The dataset consists of satellite images from 23 African countries, labeled with a village-level real-valued asset wealth index (a measure of wealth).
The labels are collected by conducting a nationally representative survey, which requires sending workers into the field to ask each household a number of questions and can be very expensive. In contrast, unlabeled satellite imagery is readily available across the globe. In this work, we augment the POVERTYMAP-WILDS dataset with satellite images from the same LandSat satellite.

Problem setting. The task is to predict a real-valued asset wealth index from a satellite image. We consider generalizing across country borders (the dataset contains 5 different cross-validation folds, each splitting the countries differently). The input $x$ is a multispectral LandSat satellite image with 8 channels (resized to $224 \times 224$ pixels). The output $y$ is a real-valued asset wealth index. The domain $d$ represents the country the image was taken in, as well as whether the image was taken in an urban or rural area. Each image also includes metadata on the location and time, although we do not make use of these except for defining the domains. Models are evaluated by the average Pearson correlation $(r)$ across 5 folds, as well as the lower of the Pearson correlations on the urban or rural subpopulations, to test generalization to these subpopulations. In particular, generalization to rural subpopulations is important as poverty is more common in rural areas.

| Split | # Domains (countries × rural-urban) | # Labeled ex. | # Unlabeled ex. |
|---|---|---|---|
| Source | 13 × 2 | 9,797 | 181,948 |
| Validation (ID) | | 1,000 | 0 |
| Target (ID) | | 1,000 | 0 |
| Validation (OOD) | 5 × 2 | 3,909 | 24,173 |
| Target (OOD) | 5 × 2 | 3,963 | 55,275 |
| Total | 23 × 2 | 19,669 | 261,396 |

Table 6: Data for POVERTYMAP-WILDS (Fold A). Each domain corresponds to a different country and whether the image was from a rural or urban area.

Data. We provide unlabeled data from the same domains as the labeled POVERTYMAP-WILDS dataset (no extra domains). The domains are split as follows:

1. Source: Images from training countries in the fold.
2. Validation (OOD): Images from validation countries in the fold.
3. Target (OOD): Images from test countries in the fold.

All the countries in these splits are disjoint. Folds also contain a Validation (ID) and Target (ID) set with data from the training countries.

The POVERTYMAP-WILDS dataset has 19,669 labeled images across these splits. We augment the dataset with 261,396 unlabeled images from the same 23 countries. These images are collected using the same process as Yeh et al. (2020) from the same LandSat satellite. The image locations are chosen to be roughly near survey locations from the Demographic and Health Surveys (DHS).

Broader context. We focus on providing unlabeled data from the countries (domains) that were in the original POVERTYMAP-WILDS dataset. Prior works on poverty prediction have used pretraining on unlabeled data (to predict an auxiliary task such as nighttime light prediction) (Xie et al., 2016; Jean et al., 2016) and semi-supervised learning via entropy minimization (Jean et al., 2018). However, these works focus on generalization to new locations in the countries in the training set. Poverty prediction is different from usual tasks in that the output is real-valued. Most methods for unlabeled data are made for classification tasks, and we hope that our dataset will encourage more work on methods for using unlabeled data to improve OOD performance in regression tasks.
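The per-fold evaluation described above can be sketched as follows; `poverty_metrics` is an illustrative helper (not the WILDS evaluator) computing the overall Pearson $r$ and the worse of the urban and rural subpopulation correlations:

```python
import numpy as np

def pearson_r(pred: np.ndarray, target: np.ndarray) -> float:
    # Pearson correlation between predicted and surveyed wealth indices
    return float(np.corrcoef(pred, target)[0, 1])

def poverty_metrics(pred, target, is_urban):
    """Overall r, plus the lower of the urban / rural subpopulation r values."""
    r_all = pearson_r(pred, target)
    r_urban = pearson_r(pred[is_urban], target[is_urban])
    r_rural = pearson_r(pred[~is_urban], target[~is_urban])
    return r_all, min(r_urban, r_rural)
```

In the benchmark, these per-fold numbers are then averaged over the 5 cross-validation folds.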
# A.5 GLOBALWHEAT-WILDS

The GLOBALWHEAT-WILDS dataset was extended from the Global Wheat Head Dataset developed by David et al. (2020; 2021). The goal of the dataset is to localize wheat heads in field images to assist plant scientists in assessing the density, size, and health of wheat heads in a particular wheat field. This imagery is acquired during different periods to cover the development of the vegetation, from emergence to organ appearance. Examples in GLOBALWHEAT-WILDS are labeled by bounding box annotations of each wheat head in the image. Wheat heads are densely packed and overlapping, making object annotation highly tedious. Thus, the Global Wheat Head Dataset (GWHD) is relatively small, while in reality more field images are available. We supplement GLOBALWHEAT-WILDS with unlabeled examples from the same set of field vehicles and sensors but taken in different acquisition sessions, i.e., at different locations or the same location in a different year. The inclusion of this unlabeled data allows: 1) a much higher spatial coverage of a field location when the data comes from an acquisition session which is already included, 2) a much higher temporal resolution when the data comes from a location which is already included, so we have a larger range of wheat growth stages, and 3) slightly more diversity when the session comes from a different location, but with the same image acquisition protocol (i.e., the same field vehicle and image sensor).

| Split | # Domains (acquisition session) | # Labeled examples | # Unlabeled examples |
|---|---|---|---|
| Source | 18 | 2,943 | 5,997 |
| Validation (ID) | 18 | 357 | 0 |
| Target (ID) | | 357 | 0 |
| Validation (OOD) | 8 | 1,424 | 2,000 |
| Target (OOD) | 21 | 1,434 | 8,997 |
| Extra | 53 | 0 | 42,445 |
| Total | 100 | 6,515 | 59,439 |

Table 7: Data for GLOBALWHEAT-WILDS.

Problem setting. The task is to localize wheat heads in high resolution overhead field images taken from above the crop canopy. We consider generalizing across acquisition sessions, each representing a particular location, time, and sensor with which the images were captured. Variation across sessions includes changes in wheat genotype, wheat head appearance, growing conditions, background appearance, illumination, and acquisition protocol. The input $x$ is an overhead outdoor image of wheat canopy, and the label $y$ is a set of box coordinates bounding the wheat heads (the spike at the top of the wheat plant holding grain), omitting any hair-like awns that may extend from the head. The domain $d$ designates an acquisition session, which corresponds to a certain location, time, and imaging sensor.

Data. We provide unlabeled data from the same domains as the labeled GLOBALWHEAT-WILDS dataset. Additionally, we provide unlabeled data from extra acquisition sessions not in the labeled GLOBALWHEAT-WILDS dataset (extra domains). The domains are split as follows:

1. Source: 18 acquisition sessions in Europe (France $\times$ 13, Norway $\times$ 2, Switzerland, United Kingdom, Belgium).
2. Validation (OOD): 8 acquisition sessions: 7 in Asia (Japan $\times$ 4, China $\times$ 3) and 1 in Africa (Sudan).
3. Target (OOD): 21 acquisition sessions: 11 in Australia and 10 in North America (USA $\times$ 6, Mexico $\times$ 3, Canada).
4. Extra (OOD): 53 acquisition sessions distributed across the world.

The source, validation, and target sessions are split by continent, while the extra sessions are taken from across the world.
For acquisition sessions with both labeled and unlabeled data, we randomly selected new patches of $1024 \times 1024$ pixels from the original underlying data. The images were preprocessed in the same way as described in David et al. (2021).

Broader context. Utilizing unlabeled data is relatively new in the context of plant phenotyping, due to the lack of a large, unlabeled database of plant images. However, larger plant image datasets are starting to become available, such as from the Terraphenotyping Reference Platform (TERRA-Ref, Burnette et al. (2018)). Increasing the sample size and variation within plant datasets is an important goal, because plants from the same species are fairly self-similar within the same field, and therefore increasing the number of locations, times, and image types included in a dataset can be beneficial for making fine-grained visual classifications for plants. Further, for plant phenotyping to be used in farming applications, such as precisely spraying weeds in a field with herbicide, models must be highly robust to variations between different fields.
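The random $1024 \times 1024$ patch selection described above can be sketched as a uniform crop; this is a minimal illustration under the assumption of a simple random crop, and `sample_patch` is a hypothetical helper (the actual preprocessing follows David et al. (2021)):

```python
import numpy as np

def sample_patch(image: np.ndarray, rng: np.random.Generator,
                 size: int = 1024) -> np.ndarray:
    """Randomly crop a size x size patch from an (H, W, C) field image."""
    h, w = image.shape[:2]
    top = int(rng.integers(0, h - size + 1))
    left = int(rng.integers(0, w - size + 1))
    return image[top:top + size, left:left + size]
```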
SplitNameCountrySiteDateSensorStage#Labeled#Heads#Unlabeled
SourceArvalis_1FranceGréoux6/2/2018HandheldPF6629350
SourceArvalis_2FranceGréoux6/16/2018HandheldF401210030
SourceArvalis_3FranceGréoux7/1/2018HandheldF-R588218930
SourceArvalis_4FranceGréoux5/27/2019HandheldF20442700
SourceArvalis_5FranceVLB*6/6/2019HandheldF44881800
SourceArvalis_6FranceVSC*6/26/2019HandheldF-R16086980
SourceArvalis_7FranceVLB*6/1/2019HandheldF-R2412470
SourceArvalis_8FranceVLB*6/1/2019HandheldF-R2010620
SourceArvalis_9FranceVLB*6/1/2020HandheldR3218940
SourceArvalis_10FranceMons6/10/2020HandheldF6015631000
SourceArvalis_11FranceVLB*6/18/2020HandheldF6028180
SourceArvalis_12FranceGréoux6/15/2020HandheldF2912771000
SourceETHZ_1SwitzerlandEschikon6/6/2018SpidercamF747496030
SourceINRAE_1FranceToulouse5/28/2019HandheldF-R17636341000
SourceNMBU_1NorwayNMBU7/24/2020CartF827345999
SourceNMBU_2NorwayNMBU8/7/2020CartR985211998
SourceRres_1UKRothamsted7/13/2015GantryF-R432192100
SourceULiege_1BelgiumGembloux7/28/2020CartR3018471000
ValidationARC_1SudanWadMedani3/1/2021HandheldF3011690
ValidationNAU_1ChinaBaiman/aHandheldPF2012400
ValidationNAU_2ChinaBaima5/2/2020CartPF10049181000
ValidationNAU_3ChinaBaima5/9/2020CartF10045961000
ValidationUkyoto_1JapanKyoto4/30/2020HandheldPF6026700
ValidationUtokyo_1JapanTsukuba5/22/2018CartR538141850
ValidationUtokyo_2JapanTsukuba5/22/2018CartR456130100
ValidationUtokyo_3JapanHokkaido6/16/2021Handheldmultiple12030850
TargetCIMMYT_1MexicoCiudadObregon3/24/2020CartPF6928431000
TargetCIMMYT_2MexicoCiudadObregon3/19/2020CartPF7727711000
TargetCIMMYT_3MexicoCiudadObregon3/23/2020CartPF6015611000
TargetKSU_1USManhattan,KS5/19/2016TractorPF10064351000
TargetKSU_2USManhattan,KS5/12/2017TractorPF10053021000
TargetKSU_3USManhattan,KS5/25/2017TractorF9552171000
TargetKSU_4USManhattan,KS5/25/2017TractorR6032851000
TargetTerraref_1USMaricopa4/2/2020GantryR1443360997
TargetTerraref_2USMaricopa3/20/2020GantryF10612741000
TargetUQ_1AustraliaGatton8/12/2015TractorPF226400
TargetUQ_2AustraliaGatton9/8/2015TractorPF16390
TargetUQ_3AustraliaGatton9/15/2015TractorF142970
TargetUQ_4AustraliaGatton10/1/2015TractorF3010390
TargetUQ_5AustraliaGatton10/9/2015TractorF-R3036800
TargetUQ_6AustraliaGatton10/14/2015TractorF-R3011470
TargetUQ_7AustraliaGatton10/6/2020HandheldR1713350
TargetUQ_8AustraliaMcAllister10/9/2020HandheldR4148350
TargetUQ_9AustraliaBrookstead10/16/2020HandheldF-R3328860
TargetUQ_10AustraliaGatton9/22/2020HandheldF-R10686290
TargetUQ_11AustraliaGatton8/31/2020HandheldPF8443450
TargetUsask_1CanadaSaskatoon6/6/2018TractorF-R20059850
+ +Table 8: Source, validation, and test domains for GLOBALWHEAT-WILDS. + +
SplitNameCountrySiteDateSensorStage#Labeled#Heads#Unlabeled
ExtraArvalis_13FranceMons6/15/2018HandheldF-R00995
ExtraArvalis_14FranceGréoux5/25/2020HandheldF001000
ExtraArvalis_15FranceVLB*6/2/2020HandheldF001000
ExtraArvalis_16FranceGréoux6/22/2020HandheldF-R001000
ExtraArvalis_17FranceBignan5/18/2021HandheldF-R001000
ExtraArvalis_18FranceVLB*5/28/2021HandheldPF001000
ExtraArvalis_19FranceEncrambade6/2/2021HandheldF001000
ExtraArvalis_20FranceOLM*6/2/2021HandheldF001000
| Split | Domain | Country | Site | Date | Platform | Stage | Counts |
|---|---|---|---|---|---|---|---|
| Extra | Arvalis_21 | France | Encrambade | 6/11/2021 | Handheld | PF | 0 / 0 / 1000 |
| Extra | Arvalis_22 | France | VLB* | 6/14/2021 | Handheld | F | 0 / 0 / 1000 |
| Extra | Arvalis_23 | France | OLM* | 6/17/2021 | Handheld | F-R | 0 / 0 / 1000 |
| Extra | CIMMYT_4 | Mexico | Ciudad Obregon | 3/11/2020 | Cart | F | 0 / 0 / 1000 |
| Extra | CIMMYT_5 | Mexico | Ciudad Obregon | 3/12/2020 | Cart | F | 0 / 0 / 1000 |
| Extra | CIMMYT_6 | Mexico | Ciudad Obregon | 3/13/2020 | Cart | F | 0 / 0 / 1000 |
| Extra | CIMMYT_7 | Mexico | Ciudad Obregon | 3/13/2020 | Cart | F | 0 / 0 / 1000 |
| Extra | CIMMYT_8 | Mexico | Ciudad Obregon | 3/13/2020 | Cart | F | 0 / 0 / 1000 |
| Extra | CIMMYT_9 | Mexico | Ciudad Obregon | 3/19/2020 | Cart | F | 0 / 0 / 1000 |
| Extra | CIMMYT_10 | Mexico | Ciudad Obregon | 4/15/2020 | Cart | E | 0 / 0 / 1000 |
| Extra | CIMMYT_11 | Mexico | Ciudad Obregon | 4/22/2020 | Cart | E | 0 / 0 / 1000 |
| Extra | CIMMYT_12 | Mexico | Ciudad Obregon | 4/22/2020 | Cart | E | 0 / 0 / 1000 |
| Extra | CIMMYT_13 | Mexico | Ciudad Obregon | 4/22/2020 | Cart | E | 0 / 0 / 1000 |
| Extra | CIMMYT_14 | Mexico | Ciudad Obregon | 4/22/2020 | Cart | PF | 0 / 0 / 1000 |
| Extra | CIMMYT_15 | Mexico | Ciudad Obregon | 4/28/2020 | Cart | PF | 0 / 0 / 1000 |
| Extra | CIMMYT_16 | Mexico | Ciudad Obregon | 5/3/2020 | Cart | F-R | 0 / 0 / 1000 |
| Extra | ETHZ_2 | Switzerland | Eschikon | 6/6/2018 | Spidercam | F | 0 / 0 / 750 |
| Extra | INRAE_2 | France | Clermont-Ferrand | 5/29/2019 | Handheld | F | 0 / 0 / 1000 |
| Extra | KSU_5 | US | Manhattan, KS | 5/4/2016 | Tractor | F | 0 / 0 / 1000 |
| Extra | KSU_6 | US | Manhattan, KS | 4/23/2017 | Tractor | P-F | 0 / 0 / 1000 |
| Extra | Rres_2 | UK | Rothamsted | 7/7/2015 | Gantry | R | 0 / 0 / 1000 |
| Extra | Rres_3 | UK | Rothamsted | 7/10/2015 | Gantry | F | 0 / 0 / 1000 |
| Extra | Rres_4 | UK | Rothamsted | 7/13/2015 | Gantry | F-R | 0 / 0 / 1000 |
| Extra | Rres_5 | UK | Rothamsted | 7/20/2015 | Gantry | F-R | 0 / 0 / 1000 |
| Extra | ULiège_2 | Belgium | Gembloux | 6/11/2020 | Cart | PF | 0 / 0 / 1000 |
| Extra | ULiège_3 | Belgium | Gembloux | 6/15/2020 | Cart | F | 0 / 0 / 1000 |
| Extra | ULiège_4 | Belgium | Gembloux | 6/16/2020 | Cart | F | 0 / 0 / 1000 |
| Extra | ULiège_5 | Belgium | Gembloux | 6/18/2020 | Cart | F | 0 / 0 / 1000 |
| Extra | ULiège_6 | Belgium | Gembloux | 6/23/2020 | Cart | F | 0 / 0 / 1000 |
| Extra | ULiège_7 | Belgium | Gembloux | 6/26/2020 | Cart | F | 0 / 0 / 1000 |
| Extra | ULiège_8 | Belgium | Gembloux | 7/7/2020 | Cart | F-R | 0 / 0 / 1000 |
| Extra | ULiège_9 | Belgium | Gembloux | 7/13/2020 | Cart | F-R | 0 / 0 / 1000 |
| Extra | Usask_2 | Canada | Saskatchewan | 8/6/2019 | Tractor | F | 0 / 0 / 800 |
| Extra | Usask_3 | Canada | Saskatchewan | 8/12/2019 | Tractor | F-R | 0 / 0 / 800 |
| Extra | Utokyo_4 | Japan | Hokkaido | 6/7/2021 | Handheld | PF | 0 / 0 / 100 |
| Extra | Utokyo_5 | Japan | Hokkaido | 6/9/2021 | Handheld | F | 0 / 0 / 100 |
| Extra | Utokyo_6 | Japan | Hokkaido | 6/16/2021 | Handheld | PF | 0 / 0 / 100 |
| Extra | Utokyo_7 | Japan | Hokkaido | 6/23/2021 | Handheld | F | 0 / 0 / 100 |
| Extra | Utokyo_8 | Japan | Hokkaido | 7/3/2021 | Handheld | F | 0 / 0 / 100 |
| Extra | Utokyo_9 | Japan | Hokkaido | 7/10/2021 | Handheld | F | 0 / 0 / 100 |
| Extra | Utokyo_10 | Japan | Hokkaido | 7/10/2021 | Handheld | F-R | 0 / 0 / 100 |
| Extra | Utokyo_11 | Japan | Hokkaido | 7/11/2021 | Handheld | F-R | 0 / 0 / 100 |
| Extra | Utokyo_12 | Japan | Hokkaido | 7/20/2021 | Handheld | R | 0 / 0 / 100 |
| Extra | Utokyo_13 | Japan | Hokkaido | 7/20/2021 | Handheld | R | 0 / 0 / 100 |
| Extra | Utokyo_14 | Japan | Hokkaido | 7/28/2021 | Handheld | R | 0 / 0 / 100 |
Table 9: Extra domains for GLOBALWHEAT-WILDS.
| Split | # Domains (scaffolds) | # Labeled examples | # Unlabeled examples |
|---|---|---|---|
| Source | 44,930 | 350,343 | 4,052,627 |
| Validation (OOD) | 31,361 | 43,793 | 430,325 |
| Target (OOD) | 43,793 | 43,793 | 517,048 |
| Total | 120,084 | 437,929 | 5,000,000 |
Table 10: Data for OGB-MOLPCBA. Each domain corresponds to a different molecule scaffold structure.

# A.6 OGB-MOLPCBA

The OGB-MOLPCBA dataset was adapted from the Open Graph Benchmark (Hu et al., 2020b) and originally curated by MoleculeNet (Wu et al., 2018) from the PubChem database (Bolton et al., 2008). The dataset is a collection of molecules annotated with 128 kinds of binary labels indicating the outcomes of different biological assays. Performing biological assays is expensive, and as a result, assay labels are only sparsely available for a tiny portion of the molecules curated in the large-scale PubChem database (Bolton et al., 2008). On the other hand, unlabeled molecule data is abundant and readily available from the database. Prior work in graph machine learning has leveraged unlabeled molecules to perform pre-training (Hu et al., 2020c) and semi-supervised learning (Sun et al., 2020). In this work, we augment the OGB-MOLPCBA dataset with unlabeled molecules subsampled from the PubChem database.

Problem setting. The task is multi-task molecule classification, and we consider generalizing to new molecule scaffold structures at test time. The input $x$ corresponds to a molecular graph (where nodes are atoms and edges are chemical bonds), and the label $y$ is a 128-dimensional binary vector representing the binary outcomes of the biological assays. $y$ can contain NaN values, indicating that the corresponding biological assays were not performed on the given molecule. The domain $d$ indicates the scaffold group a molecule belongs to. As the binary labels are highly skewed, the model's classification performance is evaluated using the Average Precision.

Data. All of the labeled and unlabeled data are taken from the PubChem database (Bolton et al., 2008). We provide unlabeled data from the same domains as the labeled OGB-MOLPCBA dataset (no extra domains).
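Because $y$ can contain NaN entries, any per-assay loss has to mask out the unmeasured assays before averaging. A minimal NumPy sketch of such a masked binary cross-entropy (illustrative only; `masked_bce` is a hypothetical helper, and the benchmark itself evaluates with Average Precision rather than this loss):

```python
import numpy as np

def masked_bce(logits, labels):
    """Mean binary cross-entropy over the assay labels that are present.

    labels has shape (n_molecules, n_assays) with entries in {0, 1, NaN};
    NaN marks assays that were never run on that molecule.
    """
    mask = ~np.isnan(labels)                 # which (molecule, assay) pairs are observed
    probs = 1.0 / (1.0 + np.exp(-logits))    # sigmoid per assay
    y = np.where(mask, labels, 0.0)          # placeholder for NaNs; masked out below
    per_entry = -(y * np.log(probs) + (1.0 - y) * np.log(1.0 - probs))
    return per_entry[mask].mean()            # average only over observed labels

# Toy example with 3 assays; the middle assay was not measured.
logits = np.array([[2.0, -1.0, 0.0]])
labels = np.array([[1.0, np.nan, 0.0]])
loss = masked_bce(logits, labels)
```

The key design point is that the NaN positions never contribute to the gradient, so sparsely assayed molecules can still be trained on.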
We curate the unlabeled data by randomly sampling 5 million molecules from the PubChem database. We then assign these unlabeled molecules to the existing labeled scaffold groups that contain the most similar molecules. Specifically, we first compute the 1024-dimensional Morgan fingerprints for all the molecules (Rogers & Hahn, 2010; Landrum et al., 2006). Then, for each unlabeled molecule, we compute its Jaccard similarity against all the labeled molecules in OGB-MOLPCBA and obtain the labeled molecule with the highest Jaccard similarity. Finally, we assign the unlabeled molecule to the scaffold group that this most similar labeled molecule belongs to. This way, the molecules within the same scaffold group are structurally similar to each other.

The domains in the OGB-MOLPCBA dataset are as follows:

1. Source: 44,930 scaffold groups.
2. Validation (OOD): 31,361 scaffold groups.
3. Target (OOD): 43,793 scaffold groups.

The largest scaffolds are in the source split and the smallest scaffolds in the target split. We assign all of the unlabeled molecules to the existing domains, so there are no extra domains added.

While the unlabeled data are similar to the labeled data in that they were all derived from PubChem (Bolton et al., 2008), it is quite possible that there was some selection bias in which molecules in PubChem were chosen to be labeled, which would lead to an undocumented distribution shift between the unlabeled and labeled datasets.

Broader context. We focused on providing unlabeled data for both training and OOD test domains. Unlabeled molecules can be used to develop and evaluate methods for domain adaptation and self-training, as well as pre-training (Hu et al., 2020c) and semi-supervised learning (Sun et al., 2020).
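The nearest-labeled-molecule assignment described above can be sketched with toy binary fingerprints (a simplification: the real pipeline uses 1024-bit Morgan fingerprints computed with RDKit, and `assign_scaffold` is a hypothetical name, not the benchmark's code):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity between two binary fingerprint vectors."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def assign_scaffold(unlabeled_fp, labeled_fps, labeled_scaffolds):
    """Assign an unlabeled molecule to the scaffold group of its most
    similar labeled molecule (highest fingerprint Jaccard similarity)."""
    sims = [jaccard(unlabeled_fp, fp) for fp in labeled_fps]
    return labeled_scaffolds[int(np.argmax(sims))]

# Two labeled molecules from two scaffold groups, 4-bit toy fingerprints.
labeled_fps = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=bool)
scaffolds = ["scaffold_A", "scaffold_B"]
group = assign_scaffold(np.array([1, 0, 0, 0], dtype=bool), labeled_fps, scaffolds)
```

Assigning by nearest labeled neighbor, rather than recomputing scaffolds directly, guarantees every unlabeled molecule lands in an existing labeled domain.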
In terms of future directions, we think it is fruitful to explore both graph-agnostic methods (e.g., pseudo-label training) and more graph-specific methods (e.g., self-supervised learning of graph neural networks (Xie et al., 2021b)).

# A.7 CIVIL COMMENTS-WILDS

The CIVIL COMMENTS-WILDS dataset (Koh et al., 2021) was adapted from the CivilComments dataset (Borkan et al., 2019), which is a collection of text comments made on online articles. The data in CIVIL COMMENTS-WILDS underwent a significant labeling and annotation process: each example was labeled as toxic or non-toxic, and annotated for whether it mentioned certain demographic identities, by at least 10 crowdworkers. Such a substantial labeling and identity-annotation process is expensive and time-consuming. On the other hand, unlabeled, unannotated text comments are readily available. For example, CIVIL COMMENTS-WILDS contains only a subset of all data available in the original CivilComments dataset (Borkan et al., 2019); Koh et al. (2021) excluded most of the remainder because those examples were not annotated for mentioning identities. In this work, we augment the CIVIL COMMENTS-WILDS dataset with these unlabeled, unannotated comments.

Problem setting. The task is to classify whether a text comment is toxic or not. The input $x$ is a text comment (at least one sentence long) originally made on an online article, the label $y$ is a binary indicator of whether the comment is rated toxic, and the domain $d$ is an 8-dimensional binary vector, where each dimension corresponds to whether the comment mentions one of 8 demographic identities: male, female, LGBTQ, Christian, Muslim, other religions, Black, or White, respectively. Each comment also includes metadata on which article the comment was made on, although we do not use this metadata for training or evaluation.

We consider the subpopulation shift setting, where the model must perform well across all subpopulations, which are defined based on $d$.
Koh et al. (2021) define 16 subpopulations (groups) based on $d$. Models are then evaluated by their worst-group accuracy, i.e., the lowest accuracy over the 16 groups considered. In our work, we use the same evaluation setup.

Data. All of the labeled and unlabeled data are taken from the CivilComments dataset (Borkan et al., 2019). After preprocessing, Koh et al. (2021) created the CIVIL COMMENTS-WILDS dataset using the 448,000 examples that were fully annotated for both toxicity $y$ and the mention of demographic identities $d$. In this work, we augment CIVIL COMMENTS-WILDS with an additional 1,551,515 examples collected by Borkan et al. (2019). We use these examples as unlabeled data, following the same preprocessing steps as were used for the labeled data in Koh et al. (2021). The resulting unlabeled examples have no identity annotations $d$ and no toxicity labels $y$. We note that Borkan et al. (2019) do provide toxicity labels for these examples in the original CivilComments dataset, but we ignore these labels and use them neither for training nor evaluation.

Because our unlabeled examples have no identity annotations, we cannot group them as Koh et al. (2021) group the labeled examples; we thus refer to this data as unlabeled data coming from extra domains (Table 11). In practice, these comments may mention any number of identities.

A substantial fraction (1,427,848 or $92\%$) of the unlabeled comments are drawn from the same articles as the labeled comments. In particular, 140,082 unlabeled comments are from the same articles as labeled comments in the test split.

CIVIL COMMENTS-WILDS exhibits class imbalance. We account for this when benchmarking methods by sampling class-balanced batches of labeled data when applicable (see Appendix B).

Broader context. In this work, we focused on supplementing CIVIL COMMENTS-WILDS with extra unannotated data from the original CivilComments dataset (Borkan et al., 2019).
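The worst-group accuracy metric used above is simple to state in code; a minimal sketch (the group ids here are toy placeholders, not the benchmark's actual 16 (label × identity) groups):

```python
import numpy as np

def worst_group_accuracy(preds, labels, groups):
    """Lowest per-group accuracy: the evaluation metric for
    subpopulation shift, where every group must be predicted well."""
    accs = []
    for g in np.unique(groups):
        m = groups == g
        accs.append((preds[m] == labels[m]).mean())
    return min(accs)

preds  = np.array([1, 0, 1, 1, 0, 0])
labels = np.array([1, 0, 0, 1, 0, 0])
groups = np.array([0, 0, 0, 1, 1, 1])   # toy group ids
wga = worst_group_accuracy(preds, labels, groups)
```

Here group 1 is classified perfectly but group 0 is not, so the worst-group accuracy is 2/3; average accuracy would hide that gap.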
In practice, unannotated text comments are widely available on the internet. Whether using such unlabeled data, as we do in this work, can help with bias is still an open question. Previous work suggests that training on large amounts of data alone is not sufficient to avoid unwanted biases, since many papers have pointed out biases in large language models (Abid et al., 2021; Nadeem et al., 2020; Gehman et al., 2020). However, recent work has also suggested that pre-trained models can be trained to be more robust against some types of spurious correlations (Hendrycks et al., 2020; Tu et al., 2020) and that additional domain- and task-specific pre-training (Gururangan et al., 2020) can also improve performance. We hope our contributions to the CIVIL COMMENTS-WILDS dataset can encourage future study on whether unlabeled data can be leveraged to improve generalization across subpopulation shifts.

| Split | # Domains (label × identity groups) | # Labeled examples | # Unlabeled examples |
|---|---|---|---|
| Source | 16 | 269,038 | 0 |
| Validation | 16 | 45,180 | 0 |
| Target | 16 | 133,782 | 0 |
| Extra | 1 | 0 | 1,551,515 |
| Total | 16 | 448,000 | 1,551,515 |

Table 11: Data for CIVIL COMMENTS-WILDS. All of the splits are identically distributed.

# A.8 AMAZON-WILDS

The AMAZON-WILDS dataset (Koh et al., 2021) was adapted from the Amazon reviews dataset (Ni et al., 2019), which is a collection of product reviews written by different reviewers. While Amazon reviews are always labeled by star ratings in practice, unlabeled data is a common source of leverage for sentiment classification more generally, with prior work in domain adaptation (Blitzer & Pereira, 2007; Glorot et al., 2011) and semi-supervised learning (Dasgupta & Ng, 2009; Li et al., 2011). In this work, we augment the AMAZON-WILDS dataset with unlabeled reviews whose star ratings have been removed.

Problem setting. The task is sentiment classification, and we consider generalizing from a set of reviewers to new reviewers at test time. The input $x$ corresponds to a review text, the label $y$ is the star rating from 1 to 5, and the domain $d$ identifies which user wrote the review. For each review, additional metadata (product ID, product category, review time, and summary) are also available. Because the goal is to train a model that performs well across a wide range of reviewers, models are evaluated by their tail performance; concretely, their accuracy on the user at the 10th percentile.

Data.
All of the labeled and unlabeled data are taken from the Amazon reviews dataset (Ni et al., 2019). We provide unlabeled data from the same domains as the labeled AMAZON-WILDS dataset. Additionally, we provide unlabeled data from extra reviewers not in the labeled AMAZON-WILDS dataset (extra domains). The domains are split as follows:

1. Source: 1,252 reviewers.
2. Validation (OOD): 1,334 reviewers.
3. Target (OOD): 1,334 reviewers.
4. Extra (OOD): 21,694 reviewers.

The reviewers in each split are distinct, and all reviewers have at least 75 reviews. The distributions of reviewers in each split are identical. AMAZON-WILDS also includes Validation (ID) and Target (ID) sets which contain data from the source reviewers.

The AMAZON-WILDS dataset has a total of 539,502 labeled reviews across these splits, and we augment the dataset with a total of 3,462,668 unlabeled reviews. For each split of the unlabeled data, we include all available reviews written by each reviewer. For the Extra (OOD) split, we include all reviewers with at least 75 reviews that are not in the Source, Validation (OOD), or Target (OOD) splits.
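The tail-performance metric for AMAZON-WILDS (accuracy on the user at the 10th percentile) can be sketched as follows. The lower-index percentile convention used here is an assumption for illustration; the benchmark's exact convention may differ:

```python
import numpy as np

def accuracy_at_percentile(per_user_accs, q=10):
    """Accuracy of the user at the q-th percentile of per-user accuracies.

    Uses a lower-index convention (an assumption): sort per-user accuracies
    and pick the entry at floor(q/100 * (n - 1)).
    """
    accs = np.sort(np.asarray(per_user_accs, dtype=float))
    idx = int(np.floor(q / 100 * (len(accs) - 1)))
    return float(accs[idx])

# Toy example: 11 users with accuracies evenly spaced in [0.5, 1.0].
per_user = np.linspace(0.5, 1.0, 11)
tail = accuracy_at_percentile(per_user, q=10)
```

A model with high average accuracy but a few poorly served reviewers scores badly under this metric, which is exactly the point of evaluating the tail.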
| Split | # Domains (reviewers) | # Labeled examples | # Unlabeled examples |
|---|---|---|---|
| Source | 1,252 | 245,502 | 0 |
| Validation (ID) | | 46,950 | 0 |
| Target (ID) | | 46,950 | 0 |
| Validation (OOD) | 1,334 | 100,050 | 266,066 |
| Target (OOD) | 1,334 | 100,050 | 268,761 |
| Extra (OOD) | 21,694 | 0 | 2,927,841 |
| Total | 25,614 | 539,502 | 3,462,668 |
Table 12: Data for AMAZON-WILDS. Each domain corresponds to a different reviewer. The Source, Validation (ID), and Target (ID) splits share the same 1,252 source reviewers.

To filter and process reviews, we followed the same data processing steps as for the labeled data in AMAZON-WILDS (Koh et al., 2021).

Broader context. We focused on providing unlabeled data from OOD domains, including both test and extra domains. Unlabeled data from the test reviewers can be used to develop and evaluate methods for domain adaptation (Ren et al., 2018; Zhang et al., 2019a; Koohbanani et al., 2021), which has been well studied in the context of sentiment classification (Blitzer & Pereira, 2007; Glorot et al., 2011). While there is limited prior work on leveraging unlabeled data from extra domains, some domain adaptation techniques can be readily adapted to leverage such unlabeled data (Ganin et al., 2016). Finally, we focus on unlabeled data specific to the task in this work, varying only the domains; this contrasts with the type of unlabeled data used for pre-training in NLP, which is much larger and more diverse (Devlin et al., 2019; Brown et al., 2020).

# B ALGORITHM DETAILS

# B.1 EMPIRICAL RISK MINIMIZATION (ERM)

As a baseline, we consider Empirical Risk Minimization (ERM). ERM ignores unlabeled data and minimizes the average labeled loss. We additionally evaluate ERM with strong data augmentation on applicable datasets, i.e., on IWILDCAM2020-WILDS, CAMELYON17-WILDS, POVERTYMAP-WILDS, and FMOW-WILDS (see Appendix C). ERM with strong data augmentation learns a model $h$ that minimizes the labeled training loss

$$
L_{\mathrm{L}}(h) = \frac{1}{n_{\mathrm{L}}} \sum_{i=1}^{n_{\mathrm{L}}} \ell \left( h \circ A_{\mathrm{strong}} \left( x_{\mathrm{L}}^{(i)} \right), y_{\mathrm{L}}^{(i)} \right), \tag{1}
$$

where $A_{\mathrm{strong}}$ is a stochastic data augmentation operation, and $\ell$ measures the prediction loss.
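Eq. (1) can be sketched in a few lines of NumPy. Everything here is a toy stand-in (a Gaussian-noise "augmentation" and a linear model), not the benchmark's actual training code, which uses RandAugment-style image transforms and deep networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_strong(x):
    """Stand-in for A_strong: a stochastic, label-preserving perturbation."""
    return x + rng.normal(scale=0.1, size=x.shape)

def cross_entropy(logits, y):
    """Numerically stable multi-class cross-entropy (the loss ell)."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()

def erm_loss(model, x_labeled, y_labeled):
    """Eq. (1): average loss over strongly augmented labeled examples."""
    return cross_entropy(model(augment_strong(x_labeled)), y_labeled)

W = rng.normal(size=(3, 2))          # toy linear "model" h
model = lambda feats: feats @ W
x = rng.normal(size=(4, 3))
y = np.array([0, 1, 0, 1])
loss = erm_loss(model, x, y)
```

Because the augmentation is resampled on every call, repeated evaluations of `erm_loss` see different views of the same labeled batch, which is exactly the stochasticity Eq. (1) encodes.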
We use $L_{\mathrm{L}}$ throughout this appendix to refer to the above labeled loss with strong augmentations (on applicable datasets).

For all datasets except CIVIL COMMENTS-WILDS, we sample labeled batches uniformly at random. In our experiments, we account for class imbalance in CIVIL COMMENTS-WILDS by explicitly sampling class-balanced batches of labeled data when computing $L_{\mathrm{L}}(h)$.

# B.2 DOMAIN-INVARIANT METHODS

Domain-invariant methods seek to learn feature representations that are invariant across domains. These methods are motivated by earlier theoretical results showing that the gap between in- and out-of-distribution performance depends on some measure of divergence between the source and target distributions (Ben-David et al., 2010). To minimize this divergence, the methods described below penalize divergence between feature representations across domains, i.e., they encourage the model to produce feature representations that are similar across domains.

Consider a model $h = g \circ f$, where the featurizer $f: \mathcal{X} \to \mathcal{F}$ maps the inputs to some feature space, and the head $g: \mathcal{F} \to \mathcal{Y}$ maps feature representations to prediction targets. Domain-invariant methods seek to constrain $f$ to output similar representations for labeled and unlabeled data.
In this work, we adapt all of our domain-invariant methods to use data augmentations on applicable datasets (see Appendix C), and thus the output of $f$ on the labeled batch is

$$
B_{\mathrm{L}} = \left\{ f \circ A_{\mathrm{strong}} \left( x_{\mathrm{L}}^{(i)} \right) : i \in (1, \dots, n_{\mathrm{L}}) \right\} \tag{2}
$$

Similarly, the output of $f$ on an unlabeled batch is

$$
B_{\mathrm{U}} = \left\{ f \circ A_{\mathrm{strong}} \left( x_{\mathrm{U}}^{(i)} \right) : i \in (1, \dots, n_{\mathrm{U}}) \right\} \tag{3}
$$

Domain-invariant methods seek to minimize some divergence $\xi : \mathcal{F} \times \mathcal{F} \to \mathbb{R}$ between the labeled data $B_{\mathrm{L}}$ and the unlabeled data $B_{\mathrm{U}}$, where the choice of divergence depends on the specific method. The divergence is expressed as a penalty term:

$$
L_{\mathrm{penalty}}(f) = \xi \left( B_{\mathrm{L}}, B_{\mathrm{U}} \right) \tag{4}
$$

The final objective is a combination of the labeled loss and the penalty loss. The balance between the two losses is controlled by the hyperparameter $\lambda$, the penalty weight:

$$
L(h) = L_{\mathrm{L}}(h) + \lambda L_{\mathrm{penalty}}(f) \tag{5}
$$

In our experiments, we study two classical domain-invariant methods, Correlation Alignment (CORAL) (Sun et al., 2016; Sun & Saenko, 2016) and Domain-Adversarial Neural Networks (DANN) (Ganin et al., 2016). These methods are well known and established, but their performance can be lower than that of newer domain-invariant methods that employ different penalties to encourage the source and target representations to be similar (Jiang et al., 2020; Zhang et al., 2021).
Examples of these newer methods are Joint Adaptation Networks (JAN) (Long et al., 2017), Conditional Domain Adversarial Networks (CDAN) (Long et al., 2018), Collaborative and Adversarial Networks (CAN) (Zhang et al., 2018), and models with Adaptive Feature Norm (AFN) (Xu et al., 2019), as well as methods that minimize the Maximum Classifier Discrepancy (MCD) (Saito et al., 2018) and the Margin Disparity Discrepancy (MDD) (Zhang et al., 2019b).

All of the above methods were developed for the single-source single-target setting, where the source domain is treated as a single distribution, and likewise for the target domain. As each WILDS 2.0 dataset comprises multiple source domains and multiple target domains, it is likely that methods that can leverage this additional structure could perform better. Examples of such methods include Multi-source Domain Adversarial Networks (MDAN) (Zhao et al., 2018) and Moment Matching for Multi-Source Domain Adaptation (M3SDA) (Peng et al., 2019). The DomainBed (Gulrajani & Lopez-Paz, 2020) and WILDS (Koh et al., 2021) benchmarks also extended single-source algorithms like CORAL and DANN to take advantage of multiple source domains in the domain generalization setting, and similar extensions in the domain adaptation setting could be promising.

Correlation Alignment (CORAL). Algorithm 1 describes CORAL, proposed by Sun et al. (2016); Sun & Saenko (2016). CORAL measures the divergence $\xi$ between batches of feature representations in terms of the deviation between their first- and second-order statistics.
Given a labeled batch of features $B_{\mathrm{L}} \in \mathbb{R}^{n_{\mathrm{L}} \times m}$ and an unlabeled batch $B_{\mathrm{U}} \in \mathbb{R}^{n_{\mathrm{U}} \times m}$, define the feature means as

$$
\mu_{\mathrm{L}} = \frac{1}{n_{\mathrm{L}}} 1^{T} B_{\mathrm{L}} \tag{6}
$$

$$
\mu_{\mathrm{U}} = \frac{1}{n_{\mathrm{U}}} 1^{T} B_{\mathrm{U}} \tag{7}
$$

and the covariance matrices as

$$
C_{\mathrm{L}} = \frac{1}{n_{\mathrm{L}} - 1} \left( B_{\mathrm{L}}^{T} B_{\mathrm{L}} - \frac{1}{n_{\mathrm{L}}} \left( 1^{T} B_{\mathrm{L}} \right)^{T} \left( 1^{T} B_{\mathrm{L}} \right) \right) \tag{8}
$$

$$
C_{\mathrm{U}} = \frac{1}{n_{\mathrm{U}} - 1} \left( B_{\mathrm{U}}^{T} B_{\mathrm{U}} - \frac{1}{n_{\mathrm{U}}} \left( 1^{T} B_{\mathrm{U}} \right)^{T} \left( 1^{T} B_{\mathrm{U}} \right) \right). \tag{9}
$$

We then compute the CORAL penalty as

$$
\xi \left( B_{\mathrm{L}}, B_{\mathrm{U}} \right) = \left\| \mu_{\mathrm{L}} - \mu_{\mathrm{U}} \right\|^{2} + \left\| C_{\mathrm{L}} - C_{\mathrm{U}} \right\|_{F}^{2}. \tag{10}
$$

We adapted our implementation from DomainBed (Gulrajani & Lopez-Paz, 2020), as done in WILDS 1.0. We note that these implementations compute the penalty as a sum of deviations in means and covariances, whereas Sun et al. (2016); Sun & Saenko (2016) penalize deviations in covariances only (Sun et al. (2016) consider features that are normalized to zero mean). On applicable datasets, we also strongly augmented all labeled and unlabeled examples using $A_{\mathrm{strong}}$, whereas Sun et al. (2016); Sun & Saenko (2016) do not explicitly require data augmentations. We add augmentations to allow for a fairer comparison to other methods which use augmentations.

Note that CORAL has also been adapted by Gulrajani & Lopez-Paz (2020); Koh et al. (2021) for domain generalization.
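The CORAL penalty of Eq. (10) is a few lines of NumPy; a minimal sketch (not the DomainBed implementation itself, though `np.cov` uses the same $1/(n-1)$ normalization as Eqs. (8)-(9)):

```python
import numpy as np

def coral_penalty(B_L, B_U):
    """Eq. (10): squared deviation between feature means plus squared
    Frobenius deviation between feature covariances."""
    mu_L, mu_U = B_L.mean(axis=0), B_U.mean(axis=0)
    C_L = np.cov(B_L, rowvar=False)   # rows are examples, columns are features
    C_U = np.cov(B_U, rowvar=False)
    return np.sum((mu_L - mu_U) ** 2) + np.sum((C_L - C_U) ** 2)

rng = np.random.default_rng(0)
B = rng.normal(size=(8, 3))           # batch of 8 examples, 3 feature dims
zero = coral_penalty(B, B)            # identical batches: zero penalty
shifted = coral_penalty(B, B + 1.0)   # mean shifted by 1 in each dim: penalty ~3
```

Shifting every feature by a constant leaves the covariances untouched, so only the mean term fires; this separation of first- and second-order deviations is the core of the penalty.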
In particular, where the original CORAL paper defines $L_{\mathrm{penalty}}$ as the divergence between just two kinds of batches (labeled and unlabeled), these works define $L_{\mathrm{penalty}}$ as the divergence between many kinds of batches, where batches are grouped based on the domain annotation $d^{(i)}$. For simplicity, we followed the original CORAL formulation and differentiate only between labeled and unlabeled batches. We leave leveraging the domain annotations $d$ to future work.

APPLICABLE DATASETS. We run CORAL on all datasets except GLOBALWHEAT-WILDS and CIVIL COMMENTS-WILDS. We do not evaluate domain-invariant methods on CIVIL COMMENTS-WILDS, since the labeled and unlabeled data are drawn from the same distribution. We do not evaluate CORAL on GLOBALWHEAT-WILDS because CORAL does not port straightforwardly to detection settings.

DANN. Algorithm 2 describes DANN, proposed by Ganin et al. (2016). DANN measures the divergence $\xi$ between batches of feature representations using the performance of a discriminator network $h_d$ that aims to discriminate between domains.
# Algorithm 1: CORAL

Input: Labeled batch $\{(x_{\mathrm{L}}^{(i)}, y_{\mathrm{L}}^{(i)}, d_{\mathrm{L}}^{(i)}) : i \in (1, \dots, n_{\mathrm{L}})\}$, unlabeled batch $\{(x_{\mathrm{U}}^{(i)}, d_{\mathrm{U}}^{(i)}) : i \in (1, \dots, n_{\mathrm{U}})\}$, strong augmentation function $A_{\mathrm{strong}}$, penalty weight $\lambda \in \mathbb{R}$, dimension of feature representations $m$

1 Compute feature representations for the labeled and unlabeled batches

$$
B_{\mathrm{L}} = \left\{ f \circ A_{\mathrm{strong}} \left( x_{\mathrm{L}}^{(i)} \right) : i \in (1, \dots, n_{\mathrm{L}}) \right\}
$$

$$
B_{\mathrm{U}} = \left\{ f \circ A_{\mathrm{strong}} \left( x_{\mathrm{U}}^{(i)} \right) : i \in (1, \dots, n_{\mathrm{U}}) \right\}
$$

2 Compute feature means and covariances for the labeled and unlabeled batches

$$
\mu_{\mathrm{L}} = \frac{1}{n_{\mathrm{L}}} 1^{T} B_{\mathrm{L}}
$$

$$
\mu_{\mathrm{U}} = \frac{1}{n_{\mathrm{U}}} 1^{T} B_{\mathrm{U}}
$$

$$
C_{\mathrm{L}} = \frac{1}{n_{\mathrm{L}} - 1} \left( B_{\mathrm{L}}^{T} B_{\mathrm{L}} - \frac{1}{n_{\mathrm{L}}} \left( 1^{T} B_{\mathrm{L}} \right)^{T} \left( 1^{T} B_{\mathrm{L}} \right) \right)
$$

$$
C_{\mathrm{U}} = \frac{1}{n_{\mathrm{U}} - 1} \left( B_{\mathrm{U}}^{T} B_{\mathrm{U}} - \frac{1}{n_{\mathrm{U}}} \left( 1^{T} B_{\mathrm{U}} \right)^{T} \left( 1^{T} B_{\mathrm{U}} \right) \right)
$$

3 Update model $h = g \circ f$ on loss

$$
\frac{1}{n_{\mathrm{L}}} \sum_{i=1}^{n_{\mathrm{L}}} \ell \left( h \circ A_{\mathrm{strong}} \left( x_{\mathrm{L}}^{(i)} \right), y_{\mathrm{L}}^{(i)} \right) + \lambda \left( \left\| \mu_{\mathrm{L}} - \mu_{\mathrm{U}} \right\|^{2} + \left\| C_{\mathrm{L}} - C_{\mathrm{U}} \right\|_{F}^{2} \right)
$$

Given a batch of features (either $B_{\mathrm{L}}$ or $B_{\mathrm{U}}$), the discriminator network $h_d$ must classify whether the examples come from the labeled or the unlabeled data. $h_d$ is optimized using the binary classification loss

$$
L(h_d) = \frac{1}{n_{\mathrm{L}}} \sum_{i=1}^{n_{\mathrm{L}}} \ell \left( h_d \circ f \circ A_{\mathrm{strong}} \left( x_{\mathrm{L}}^{(i)} \right), 1 \right) + \frac{1}{n_{\mathrm{U}}} \sum_{i=1}^{n_{\mathrm{U}}} \ell \left( h_d \circ f \circ A_{\mathrm{strong}} \left( x_{\mathrm{U}}^{(i)} \right), 0 \right) \tag{11}
$$

The loss of $h_d$ is related to $\xi$ exactly as

$$
\xi \left( B_{\mathrm{L}}, B_{\mathrm{U}} \right) = -L(h_d) \tag{12}
$$

In other words, at the same time that $h_d$ is optimized to minimize its loss $L(h_d)$, the featurizer $f$ is incentivized to minimize $L_{\mathrm{penalty}} = \xi(B_{\mathrm{L}}, B_{\mathrm{U}}) = -L(h_d)$, i.e., to maximize $L(h_d)$. See Algorithm 2 for details.

We adapted our implementation from the Transfer Learning Library (Jiang et al., 2020) and matched all details to the formulation given by Ganin et al. (2016), except for two changes. First, on applicable datasets, we strongly augmented all labeled and unlabeled examples using $A_{\mathrm{strong}}$, whereas Ganin et al. (2016) do not explicitly require data augmentations; we add augmentations to allow for a fairer comparison to other methods which use augmentations. Second, where Ganin et al. (2016) optimize $f$, $g$, and $h_d$ using the same learning rate $\eta$, we use three different learning rates $\eta_f, \eta_g, \eta_{h_d}$, following the implementation of the Transfer Learning Library (Jiang et al., 2020).

APPLICABLE DATASETS. We run DANN on all datasets except GLOBALWHEAT-WILDS and CIVIL COMMENTS-WILDS. We do not evaluate domain-invariant methods on CIVIL COMMENTS-WILDS, since the labeled and unlabeled data are drawn from the same distribution.
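The adversarial coupling of Eqs. (11)-(12) can be sketched as follows, with a toy linear discriminator head standing in for the deep network $h_d$ (a minimal illustration, not the Transfer Learning Library implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_loss(h_d, feats_labeled, feats_unlabeled):
    """Eq. (11): binary cross-entropy of the domain discriminator, which
    tries to output 1 on labeled-source features and 0 on unlabeled ones."""
    p_L = sigmoid(h_d(feats_labeled))
    p_U = sigmoid(h_d(feats_unlabeled))
    return -np.log(p_L).mean() - np.log(1.0 - p_U).mean()

rng = np.random.default_rng(0)
w = rng.normal(size=3)
h_d = lambda feats: feats @ w           # toy linear discriminator head
feats_L = rng.normal(size=(5, 3))       # featurizer outputs on labeled batch
feats_U = rng.normal(size=(5, 3))       # featurizer outputs on unlabeled batch
L_d = discriminator_loss(h_d, feats_L, feats_U)
penalty = -L_d                          # Eq. (12): the featurizer is rewarded for fooling h_d
```

The sign flip is the whole trick: gradient descent on `penalty` with respect to the featurizer is gradient ascent on the discriminator's loss, which in practice is implemented with a gradient reversal layer.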
We do not evaluate DANN on GLOBALWHEAT-WILDS because DANN does not port straightforwardly to detection settings.

# Algorithm 2: DANN

Input: Labeled batch $\{(x_{\mathrm{L}}^{(i)}, y_{\mathrm{L}}^{(i)}, d_{\mathrm{L}}^{(i)}) : i \in (1, \dots, n_{\mathrm{L}})\}$, unlabeled batch $\{(x_{\mathrm{U}}^{(i)}, d_{\mathrm{U}}^{(i)}) : i \in (1, \dots, n_{\mathrm{U}})\}$, strong augmentation function $A_{\mathrm{strong}}$, penalty weight $\lambda \in \mathbb{R}$, learning rates $\eta_f, \eta_g, \eta_{h_d}$

1 Compute the loss for the domain discriminator $h_d$

$$
L(h_d) = \frac{1}{n_{\mathrm{L}}} \sum_{i=1}^{n_{\mathrm{L}}} \ell \left( h_d \circ f \circ A_{\mathrm{strong}} \left( x_{\mathrm{L}}^{(i)} \right), 1 \right) + \frac{1}{n_{\mathrm{U}}} \sum_{i=1}^{n_{\mathrm{U}}} \ell \left( h_d \circ f \circ A_{\mathrm{strong}} \left( x_{\mathrm{U}}^{(i)} \right), 0 \right)
$$

2 Compute the loss for the model $h = g \circ f$

$$
\frac{1}{n_{\mathrm{L}}} \sum_{i=1}^{n_{\mathrm{L}}} \ell \left( h \circ A_{\mathrm{strong}} \left( x_{\mathrm{L}}^{(i)} \right), y_{\mathrm{L}}^{(i)} \right) - \lambda L(h_d)
$$

3 Update $f, g, h_d$ using the appropriate learning rates $\eta_f, \eta_g, \eta_{h_d}$

# B.3 SELF-TRAINING METHODS

Self-training methods leverage unlabeled data by "pseudo-labeling" unlabeled examples with the model's own predictions and training on them as if they were labeled examples. In certain formulations, this is equivalent to minimizing the model's conditional entropy on the unlabeled data (Grandvalet & Bengio, 2005). Contemporary self-training methods also often make use of consistency regularization, i.e., encouraging the model to make similar predictions on noisy/augmented versions of unlabeled examples.
Self-training methods have recently been shown to be empirically successful at unsupervised domain adaptation (Saito et al., 2017; Berthelot et al., 2021; Zhang et al., 2021).

The self-training methods we study follow this general structure: given an unlabeled example $x_{\mathrm{U}}$, these algorithms generate a pseudolabel $\tilde{y}_{\mathrm{U}} = \psi(x_{\mathrm{U}})$, where the pseudolabel-generating function $\psi : \mathcal{X} \to \mathcal{Y}$ differs between algorithms. For classification problems, we study algorithms that produce hard pseudolabels, which are one-hot class predictions, rather than soft pseudolabels, which are continuous distributions over the classes. Next, algorithms define an unlabeled loss $L_{\mathrm{U}}(h)$ for model $h$ by minimizing the loss between the pseudolabels $\tilde{y}_{\mathrm{U}}$ and the model's predictions. The algorithms we consider below augment $x_{\mathrm{U}}$ during training; i.e., rather than minimizing the loss between $\tilde{y}_{\mathrm{U}}$ and the model's prediction on $x_{\mathrm{U}}$, the algorithms below minimize the loss of predictions on $A(x_{\mathrm{U}})$, where $A$ is a stochastic, label-preserving augmentation. Assuming a model $h$ that outputs real-valued logits, the complete unlabeled loss is

$$
L_{\mathrm{U}}(h) = \frac{1}{n_{\mathrm{U}}} \sum_{i=1}^{n_{\mathrm{U}}} \ell \left( h \circ A \left( x_{\mathrm{U}}^{(i)} \right), \tilde{y}_{\mathrm{U}}^{(i)} \right) \tag{13}
$$

This unlabeled loss is jointly optimized with the standard ERM labeled loss. The balance between the two losses is controlled by the hyperparameter $\lambda(t)$, which is a function of the current step $t$:

$$
L(h) = L_{\mathrm{L}}(h) + \lambda(t) L_{\mathrm{U}}(h) \tag{14}
$$

Pseudo-Label. Algorithm 3 describes Pseudo-Label, proposed by Lee (2013). In this algorithm, the model dynamically generates pseudolabels and trains on them at each batch.
Formally, the pseudolabel-generating function $\psi$ is given by

$$
\tilde{y}_{\mathrm{U}} = \psi \left( x_{\mathrm{U}} \right) = \underset{y}{\arg \max} \; h \circ A_{\mathrm{strong}} \left( x_{\mathrm{U}} \right) [y] \tag{15}
$$

where $A_{\mathrm{strong}}$ is the strong augmentation function described in Appendix C. Pseudo-Label then computes the loss between a strongly augmented example and its associated pseudolabel.

In order to more fairly compare Pseudo-Label to FixMatch, we add confidence thresholding to the Pseudo-Label algorithm, a feature also added in the implementation of Pseudo-Label by Sohn et al. (2020). With confidence thresholding, examples on which the model has low confidence have zero loss; i.e., for some threshold hyperparameter $\tau$, the loss an example $x_{\mathrm{U}}$ contributes is

$$
\mathbf{1} \left\{ \max_{y} \operatorname{Softmax} \left( h \circ A_{\mathrm{strong}} \left( x_{\mathrm{U}} \right) \right) [y] \geq \tau \right\} \cdot \ell \left( h \circ A_{\mathrm{strong}} \left( x_{\mathrm{U}} \right), \tilde{y}_{\mathrm{U}} \right) \tag{16}
$$

Finally, Pseudo-Label increases the balance $\lambda(t)$ between the labeled and unlabeled losses over time, initially placing 0 weight on $L_{\mathrm{U}}(h)$ and then linearly increasing the unlabeled loss weight until it reaches the full value of the hyperparameter $\lambda$ at some threshold step. We fix the step at which $\lambda(t)$ reaches its maximum value ($\lambda$) to be $40\%$ of the total number of training steps, matching the implementation of Sohn et al. (2020). This scheduling allows the algorithm to initially prioritize the labeled loss, as generated pseudolabels are mostly incorrect while the model has low accuracy. Formally, at step $t$ and given a total number of steps $T$,

$$
\lambda(t) = \min \left\{ \frac{t}{0.4\,T}, 1 \right\} \cdot \lambda \tag{17}
$$

We add augmentations to Pseudo-Label in order to allow for a fairer comparison to other methods that use augmentations. On applicable datasets, we strongly augmented all labeled and unlabeled examples using $A_{\mathrm{strong}}$, whereas Lee (2013) do not use any data augmentations, i.e., all instances of $A_{\mathrm{strong}}$ are replaced with the identity function.

# Algorithm 3: Pseudo-Label

Input: Labeled batch $\{(x_{\mathrm{L}}^{(i)}, y_{\mathrm{L}}^{(i)}, d_{\mathrm{L}}^{(i)}) : i \in (1, \dots, n_{\mathrm{L}})\}$, unlabeled batch $\{(x_{\mathrm{U}}^{(i)}, d_{\mathrm{U}}^{(i)}) : i \in (1, \dots, n_{\mathrm{U}})\}$, strong augmentation function $A_{\mathrm{strong}}$, unlabeled loss weight for the current step $\lambda(t) \in \mathbb{R}$, confidence threshold $\tau \in [0, 1]$

1 Generate pseudolabels $\tilde{y}_{\mathrm{U}}^{(i)} = \arg \max_{y} h \circ A_{\mathrm{strong}}(x_{\mathrm{U}}^{(i)})[y]$ for the unlabeled data
2 Update model $h$ on loss

$$
\frac{1}{n_{\mathrm{L}}} \sum_{i=1}^{n_{\mathrm{L}}} \ell \left( h \circ A_{\mathrm{strong}} \left( x_{\mathrm{L}}^{(i)} \right), y_{\mathrm{L}}^{(i)} \right) + \frac{\lambda(t)}{n_{\mathrm{U}}} \sum_{i=1}^{n_{\mathrm{U}}} \mathbf{1} \left\{ \max_{y} \operatorname{Softmax} \left( h \circ A_{\mathrm{strong}} \left( x_{\mathrm{U}}^{(i)} \right) \right) [y] \geq \tau \right\} \cdot \ell \left( h \circ A_{\mathrm{strong}} \left( x_{\mathrm{U}}^{(i)} \right), \tilde{y}_{\mathrm{U}}^{(i)} \right)
$$

APPLICABLE DATASETS. We evaluate Pseudo-Label on all datasets except POVERTYMAP-WILDS, as POVERTYMAP-WILDS is a regression dataset, and hard pseudolabeling does not port straightforwardly to regression tasks.

FixMatch. Algorithm 4 describes FixMatch, proposed by Sohn et al. (2020). Like Pseudo-Label, this algorithm dynamically generates pseudolabels and trains on them at each batch.
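The confidence-thresholded pseudo-label loss and the linear ramp of Eq. (17), both shared in spirit by Pseudo-Label and FixMatch, can be sketched as follows (a simplification: here the pseudolabel and the loss are computed on the same logits, whereas the algorithms recompute predictions on a fresh augmented view):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def pseudolabel_loss(logits, tau):
    """Confidence-thresholded pseudo-label loss in the style of Eq. (16):
    examples whose top softmax probability falls below tau contribute zero."""
    probs = softmax(logits)
    conf = probs.max(axis=1)
    pseudo = probs.argmax(axis=1)                     # hard pseudolabels, Eq. (15)
    ce = -np.log(probs[np.arange(len(pseudo)), pseudo])
    return float(np.where(conf >= tau, ce, 0.0).mean())

def lam(t, T, lam_max):
    """Eq. (17): linear ramp reaching lam_max at 40% of training."""
    return min(t / (0.4 * T), 1.0) * lam_max

# One confident and one uncertain unlabeled example (2 classes).
logits = np.array([[4.0, 0.0], [0.1, 0.0]])
loss = pseudolabel_loss(logits, tau=0.9)   # only the confident example contributes
```

With $\tau = 0.9$, the second example's top probability (about 0.52) falls below the threshold, so it is masked out, mirroring how low-confidence pseudolabels are suppressed early in training.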
FixMatch additionally employs consistency regularization on the unlabeled data. While pseudolabels are generated on a weakly augmented view of the unlabeled examples, the loss is computed with respect to predictions on a strongly augmented view. This encourages the model's predictions on a strongly augmented example to be consistent with its predictions on the same example when weakly augmented. For details about the strong versus weak augmentations we use, see Appendix C.

Formally, the pseudolabel-generating function $\psi$ is given by

$$
\tilde{y}_{\mathrm{U}} = \psi\left(x_{\mathrm{U}}\right) = \underset{y}{\arg\max}\, h \circ A_{\mathrm{weak}}\left(x_{\mathrm{U}}\right)[y] \tag{18}
$$

Like Pseudo-Label, FixMatch uses confidence thresholding, and unlabeled examples on which the model has low confidence have zero loss. Following Sohn et al. (2020), we keep the balance between labeled and unlabeled losses constant at $\lambda(t) = \lambda$. FixMatch's original authors justify keeping $\lambda(t)$ at a fixed magnitude (as opposed to slowly increasing $\lambda(t)$ as in Pseudo-Label) by noting that most predictions made by FixMatch are initially low confidence, so for a sufficiently high confidence threshold $\tau$, most unlabeled examples have zero loss, keeping the magnitude of $L_{\mathrm{U}}(h)$ initially small. This magnitude grows over time, providing a natural curriculum (Sohn et al., 2020).

We endeavored to match our implementation of FixMatch to the formulation of Sohn et al. (2020), except in the use of augmentations for labeled data. Where we strongly augment all labeled examples using $A_{\mathrm{strong}}$ in Algorithm 4, Sohn et al. (2020) explicitly choose to use weak instead of strong augmentations on the labeled examples. However, our results on DomainNet in Appendix E suggest that using strong instead of weak augmentations for the labeled examples improves performance, so we use strong augmentations on the labeled examples in order to allow for a fairer comparison to other methods.

# Algorithm 4: FixMatch

Input: Labeled batch $\{(x_{\mathrm{L}}^{(i)},y_{\mathrm{L}}^{(i)},d_{\mathrm{L}}^{(i)}):i\in (1,\dots ,n_{\mathrm{L}})\}$, unlabeled batch $\{(x_{\mathrm{U}}^{(i)},d_{\mathrm{U}}^{(i)}):i\in (1,\dots ,n_{\mathrm{U}})\}$, weak augmentation function $A_{\mathrm{weak}}$, strong augmentation function $A_{\mathrm{strong}}$, unlabeled loss weight $\lambda \in \mathbb{R}$, confidence threshold $\tau \in [0,1]$

1 Generate pseudolabels $\tilde{y}_{\mathrm{U}} = \arg\max_{y} h \circ A_{\mathrm{weak}}(x_{\mathrm{U}})[y]$ for the unlabeled data
2 Update model $h$ on loss

$$
\frac{1}{n_{\mathrm{L}}} \sum_{i=1}^{n_{\mathrm{L}}} \ell\left(h \circ A_{\mathrm{strong}}\left(x_{\mathrm{L}}^{(i)}\right), y_{\mathrm{L}}^{(i)}\right) + \frac{\lambda}{n_{\mathrm{U}}} \sum_{i=1}^{n_{\mathrm{U}}} \mathbf{1}\left\{\max_{y} \operatorname{Softmax}\left(h \circ A_{\mathrm{strong}}\left(x_{\mathrm{U}}^{(i)}\right)\right)[y] \geq \tau\right\} \cdot \ell\left(h \circ A_{\mathrm{strong}}\left(x_{\mathrm{U}}^{(i)}\right), \tilde{y}_{\mathrm{U}}^{(i)}\right)
$$

APPLICABLE DATASETS. We evaluate FixMatch on the image datasets IWILDCAM2020-WILDS, CAMELYON17-WILDS, POVERTYMAP-WILDS, and FMOW-WILDS. We do not evaluate FixMatch on other datasets because FixMatch relies on enforcing consistency across data augmentations, which we only define for image datasets (see Appendix C).

Noisy Student. Algorithm 5 describes Noisy Student, proposed by Xie et al. (2020).
Unlike Pseudo-Label and FixMatch, which update the model and re-generate pseudolabels each batch, Noisy Student generates pseudolabels, fixes them, and then trains the model to convergence before generating new pseudolabels. First, an initial teacher model is trained on the labeled data; next, the teacher model pseudolabels the unlabeled data, and a student model is trained on the labeled and pseudolabeled data; finally, the student model becomes the new teacher, and the cycle repeats (see Algorithm 5). Each (teacher, student) pair is termed an iteration; we study the results of two iterations.

We train Noisy Student using hard pseudolabels, which the teacher generates over weakly augmented inputs:

$$
\tilde{y}_{\mathrm{U}} = \psi\left(x_{\mathrm{U}}\right) = \underset{y}{\arg\max}\, f_{\mathrm{teacher}} \circ A_{\mathrm{weak}}\left(x_{\mathrm{U}}\right)[y] \tag{19}
$$

While the teacher generates pseudolabels on weakly augmented data, the student must make both labeled and unlabeled predictions on noisy (i.e., strongly augmented) data. Following Xie et al. (2020), we add a dropout layer ($p = 0.5$) before the student's last layer, randomly corrupting final feature maps. Students are thus trained to be consistent across both data-based and model-based noise. We denote the model with inserted dropout as $\mathrm{Dropout} \circ f$. Xie et al. (2020) add even more model-based noise using stochastic depth; for simplicity, we do not use stochastic depth in our implementation.

We follow the original paper and fix the balance between labeled and unlabeled losses at $\lambda(t) = 1$. Noisy Student does not use confidence thresholding.

Note that Xie et al. (2020) use both dropout and strong data augmentations when training the initial teacher on labeled data. We reuse models from our ERM + Data Augmentation experiments as initial teacher models; thus we differ from Xie et al.
(2020) in that our initial teachers were trained with strong augmentations, but not dropout (see Algorithm 5).

APPLICABLE DATASETS. We evaluate Noisy Student on all datasets except the text datasets CIVILCOMMENTS-WILDS and AMAZON-WILDS. For GLOBALWHEAT-WILDS and OGB-MOLPCBA, we run Noisy Student without noise from data augmentations.

# Algorithm 5: Noisy Student

Input: Labeled dataset $\{(x_{\mathrm{L}},y_{\mathrm{L}},d_{\mathrm{L}})\}$ divided into batches of size $n_\mathrm{L}$, unlabeled dataset $\{(x_{\mathrm{U}},d_{\mathrm{U}})\}$ divided into batches of size $n_\mathrm{U}$, total number of iterations $S$, weak augmentation function $A_{\mathrm{weak}}$, strong augmentation function $A_{\mathrm{strong}}$

1 Train an initial teacher model $f^{[0]}$ to convergence on labeled examples using the following batch-wise objective

$$
\frac{1}{n_{\mathrm{L}}} \sum_{i=1}^{n_{\mathrm{L}}} \ell\left(f^{[0]} \circ A_{\mathrm{strong}}\left(x_{\mathrm{L}}^{(i)}\right), y_{\mathrm{L}}^{(i)}\right)
$$

2 for iteration $s\in (1,\dots ,S)$ do

3 Generate fixed pseudolabels $\tilde{y}_{\mathrm{U}} = \arg\max_{y} f^{[s-1]}\circ A_{\mathrm{weak}}(x_{\mathrm{U}})[y]$ for the unlabeled data

4 Train the next student model $f^{[s]}$ to convergence on unlabeled and labeled examples using the following batch-wise objective

$$
\frac{1}{n_{\mathrm{L}}} \sum_{i=1}^{n_{\mathrm{L}}} \ell\left(\mathrm{Dropout} \circ f^{[s]} \circ A_{\mathrm{strong}}\left(x_{\mathrm{L}}^{(i)}\right), y_{\mathrm{L}}^{(i)}\right) + \frac{1}{n_{\mathrm{U}}} \sum_{i=1}^{n_{\mathrm{U}}} \ell\left(\mathrm{Dropout} \circ f^{[s]} \circ A_{\mathrm{strong}}\left(x_{\mathrm{U}}^{(i)}\right), \tilde{y}_{\mathrm{U}}^{(i)}\right)
$$

# B.4 SELF-SUPERVISION METHODS

Self-supervised methods learn useful representations by training on unlabeled data via auxiliary "proxy" tasks.
Common approaches include reconstruction tasks (Vincent et al., 2008; Erhan et al., 2010; Devlin et al., 2019; Gidaris et al., 2018; Lewis et al., 2020), which remove or corrupt a small part of each training example and use it as a prediction goal, and contrastive learning (He et al., 2020; Chen et al., 2020b; Caron et al., 2020; Radford et al., 2021b), which aims to learn a representation space such that similar example pairs stay close to each other while dissimilar ones are far apart. The underlying assumption is that feature encoders that solve the proxy tasks will also perform well on the downstream supervised task (Lee et al., 2020a; Wei et al., 2021).

In our work, we consider two self-supervised methods: SwAV (Caron et al., 2020) for images and masked language modeling (Devlin et al., 2019) for text. We use these methods to pre-train models on the unlabeled data. In all cases, we start with the same model initialization used for all of the other algorithms on that dataset; do additional pre-training via self-supervision on the unlabeled data; and then initialize a new classifier head and finetune the model via ERM with data augmentation. This follows the procedure in Shen et al. (2021). As a concrete example, for FMOW-WILDS, we use the following procedure to run our ERM experiments:

1. Initialize a DenseNet-121 model (Huang et al., 2017) using ImageNet-pretrained weights.
2. Finetune the model on labeled data from the source domain.
3. Evaluate on held-out data from the target domain.

For SwAV, we use the exact same procedure but with the addition of a second step:

1. Initialize a DenseNet-121 model (Huang et al., 2017) using ImageNet-pretrained weights.
2. Continue pre-training the model with SwAV on unlabeled data from the target domain.
3. Finetune the model on labeled data from the source domain.
4. Evaluate on held-out data from the target domain.
Similarly, for text datasets, we initialized pre-trained BERT models and then continued pre-training them using masked language modeling on the unlabeled data in WILDS 2.0.

We tuned hyperparameters for finetuning, following the exact same procedure and hyperparameters as for ERM; we did not tune hyperparameters for pre-training.

SwAV. We directly use the public SwAV repository available at https://github.com/facebookresearch/swav. We keep almost all of the hyperparameters used by the original paper for 400-epoch training with batch size 256. However, we make the following changes based on issues and tips from the original authors in the SwAV repository:

1. To stabilize training, we opt not to use a queue; this follows the suggestion in issue #69.
2. For each dataset, we set the number of prototypes to approximately 10x the number of classes; this follows the suggestion in issue #37. For POVERTYMAP-WILDS, which is a regression problem, we use 1000 prototypes, which displayed more stable training than 10 or 100 prototypes.
3. We set $\epsilon = 0.03$ to avoid representation collapse; this follows the suggestion in the Common Issues section of the repository's README.
4. We set the base learning rate via the suggested "linear scaling" rule (issue #37). In other words, for total batch size (over GPUs) $\geq 512$, the learning rate is scaled linearly. For smaller batch sizes ($< 512$), we set the base learning rate at 0.6. We divide the base learning rate by 1000 to obtain the final learning rate, since each of the base/final pairs that the paper reports differ by that factor.

We set the maximum number of epochs to 400 but stop pre-training early when the loss does not decrease by more than $0.3\%$ for 5 consecutive epochs.

APPLICABLE DATASETS. We evaluate SwAV on IWILDCAM2020-WILDS, CAMELYON17-WILDS, POVERTYMAP-WILDS, and FMOW-WILDS.
We do not evaluate SwAV on other datasets because SwAV relies on training with data augmentations, which we only define for image datasets (see Appendix C).

Masked language modeling (MLM). MLM is a popular self-supervised objective for text data and is commonly used to pre-train model representations (Devlin et al., 2019). Given an unlabeled text corpus $\mathcal{X} = \{X_i\}$ (e.g., a set of comments for CivilComments; a set of reviews for Amazon), a training example $(x,y)$ can be generated by randomly masking tokens in each text piece $X$ (e.g., $x =$ "The [MASK] is the currency [MASK] the UK"; $y =$ ("pound", "of")). The model is trained to use its representation of the masked input $x$ to predict the original tokens $y$ that should go in each mask. The MLM objective encourages the model to learn syntactic and semantic knowledge (e.g., to predict "of") as well as world knowledge (e.g., to predict "pound") present in the text corpus (Guu et al., 2020).

For our implementation, we use DistilBERT (Sanh et al., 2019) as our initial model and pre-train it with the MLM objective on the unlabeled data of each task (CivilComments, Amazon). Following the original BERT implementation (Devlin et al., 2019), we randomly mask $15\%$ of the tokens in each input text piece, of which $80\%$ are replaced with [MASK], $10\%$ are replaced with a random token (according to the unigram distribution), and $10\%$ are kept unchanged.

APPLICABLE DATASETS. We evaluate masked language modeling on the text datasets CIVILCOMMENTS-WILDS and AMAZON-WILDS.

# C DATA AUGMENTATION

In this work, several methods we study leverage data augmentations to encourage generalization across domains. Below, we provide details on our implementations of these augmentations.

Image classification (IWILDCAM2020-WILDS, CAMELYON17-WILDS, and FMOW-WILDS). We use a consistent set of data augmentations across the image datasets IWILDCAM2020-WILDS, CAMELYON17-WILDS, and FMOW-WILDS.
For methods other than SwAV, we define two strengths of data augmentations: a weak function $A_{\mathrm{weak}}$ and a strong function $A_{\mathrm{strong}}$, and we specify both according to Sohn et al. (2020). The weak augmentation function $A_{\mathrm{weak}}$ is a random horizontal flip. The strong augmentation function $A_{\mathrm{strong}}$ is a composition of (i) random horizontal flip, (ii) RandAugment (Cubuk et al., 2020), and (iii) Cutout (DeVries & Taylor, 2017). For the exact implementation of RandAugment, we directly use the implementation of Zhang et al. (2021), which is based on the implementation used by Sohn et al. (2020). This implementation specifies a pool of operations and samples a magnitude for each operation uniformly across a pre-specified range. The pool of operations includes: autocontrast, brightness, color jitter, contrast, equalize, posterize, rotation, sharpness, horizontal and vertical shearing, solarize, and horizontal and vertical translations. We apply $N = 2$ random operations for all experiments (see Appendix D.4).

The labeled loss for all methods, including finetuning models pre-trained with SwAV, uses this strong augmentation function.

For SwAV pre-training, we use the data augmentation pipeline from the original paper (Caron et al., 2020), which is almost identical to the strong data augmentation introduced in SimCLR (Chen et al., 2020a) but with different random crop scales to accommodate the several additional lower-resolution crops. For each image, the pipeline is the following sequence of random transformations: resized crop, horizontal flip, color jitter, grayscale, and Gaussian blur.

POVERTYMAP-WILDS. As POVERTYMAP-WILDS is a dataset of multispectral images, we define a separate set of data augmentations. For methods other than SwAV, we define two strengths of data augmentations: a weak function $A_{\mathrm{weak}}$ and a strong function $A_{\mathrm{strong}}$.
The weak augmentation function $A_{\mathrm{weak}}$ is a random horizontal flip. The strong augmentation function $A_{\mathrm{strong}}$ is a composition of (i) random horizontal flip, (ii) random affine transformation, (iii) color jitter on the RGB channels, and (iv) Cutout on all channels (DeVries & Taylor, 2017).

We use the same augmentations for SwAV pre-training as above for IWILDCAM2020-WILDS, CAMELYON17-WILDS, and FMOW-WILDS, but note that the color jitter module is applied only to the RGB channels.

Other datasets. We do not define data augmentations for the other datasets, i.e., GLOBALWHEAT-WILDS, OGB-MOLPCBA, CIVILCOMMENTS-WILDS, and AMAZON-WILDS. Although GLOBALWHEAT-WILDS is an image dataset and could be transformed using the augmentations defined above, we omit data augmentations for simplicity, because such augmentations would generally require changing $y$ as well as $x$ (e.g., random translations on the input image also require translating the bounding box labels). For OGB-MOLPCBA, we omit augmentations because data augmentations on graphs are not well developed. CIVILCOMMENTS-WILDS and AMAZON-WILDS are text datasets; although data augmentations have been proposed for text, we do not use them because training with augmentations is not as standard on text datasets as on image datasets. For these datasets, methods are benchmarked without augmentations, i.e., we substitute all occurrences of $A_{\mathrm{weak}}$ and $A_{\mathrm{strong}}$ with the identity.

# D EXPERIMENTAL DETAILS

# D.1 IN-DISTRIBUTION VS. OUT-OF-DISTRIBUTION PERFORMANCE

We report both in-distribution and out-of-distribution performance metrics on all datasets, with the exception of OGB-MOLPCBA, which does not have a separate in-distribution test set.
Using the terminology in WILDS (Koh et al., 2021), we consider the train-to-train in-distribution comparison on IWILDCAM2020-WILDS, CAMELYON17-WILDS, FMOW-WILDS, and POVERTYMAP-WILDS, and the average comparison on CIVILCOMMENTS-WILDS and AMAZON-WILDS.

# D.2 MODEL ARCHITECTURES

For all experiments, we use the same models for each dataset as in WILDS 1.0:

- IWILDCAM2020-WILDS: ResNet-50 (He et al., 2016).
- CAMELYON17-WILDS: DenseNet-121 (Huang et al., 2017).
- FMOW-WILDS: DenseNet-121 (Huang et al., 2017).
- POVERTYMAP-WILDS: Multi-spectral ResNet-18 (Yeh et al., 2020).
- GLOBALWHEAT-WILDS: Faster-RCNN (Ren et al., 2015).
- OGB-MOLPCBA: Graph Isomorphism Network (Xu et al., 2018).
- CIVILCOMMENTS-WILDS: DistilBERT (Sanh et al., 2019).
- AMAZON-WILDS: DistilBERT (Sanh et al., 2019).

The models for IWILDCAM2020-WILDS, FMOW-WILDS, and GLOBALWHEAT-WILDS were initialized with weights pre-trained on ImageNet. Note that models for CAMELYON17-WILDS were not initialized with ImageNet weights. The DistilBERT models were also initialized with pre-trained weights from the Transformers library.

# D.3 BATCH SIZES AND BATCH NORMALIZATION

For each dataset, we set the total batch size (where a batch contains both labeled and unlabeled data) to the maximum that can fit on 12GB of GPU memory (Table 13). For all the methods that leverage unlabeled data, except the pre-training algorithms, we run with 4 steps of gradient accumulation, resulting in a $4\times$ larger effective batch size. For SwAV pre-training, we run with 4 GPUs in parallel, which achieves a similar effect. For masked LM pre-training, we run with the default setting of 256 steps of gradient accumulation. These larger batch sizes deviate from the defaults used in the WILDS paper (Koh et al., 2021). We use these larger batch sizes because methods that leverage unlabeled data tend to use larger batch sizes (Sohn et al., 2020; Xie et al., 2020; Caron et al., 2020).
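A minimal sketch of the accumulation scheme (a toy one-parameter model with illustrative numbers, not the actual training code): averaging gradients over 4 equal-sized micro-batches before a single optimizer step is equivalent to one step on the $4\times$-larger pooled batch.

```python
# Toy 1-parameter model w*x fit with squared error; numbers are illustrative.
def grad(w, batch):
    # Mean gradient of (w*x - y)^2 over the batch.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def accumulated_step(w, micro_batches, lr=0.1):
    accum = 0.0
    for mb in micro_batches:          # 4 forward/backward passes...
        accum += grad(w, mb)
    accum /= len(micro_batches)       # ...averaged before a single update
    return w - lr * accum

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
micro_batches = [data[i:i + 1] for i in range(4)]   # four micro-batches of size 1

w_accum = accumulated_step(1.0, micro_batches)      # accumulated update
w_pooled = 1.0 - 0.1 * grad(1.0, data)              # one step on the pooled batch
# Both give the same updated weight, so accumulation trades memory for steps.
```

The equivalence holds exactly here because the micro-batches are equal-sized; with batch-dependent layers such as batch normalization, the two regimes differ, as discussed below.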
| Dataset | WILDS 1.0 batch size | WILDS 2.0 batch size |
| --- | --- | --- |
| CAMELYON17-WILDS | 32 | 168 |
| CIVILCOMMENTS-WILDS | 16 | 48 |
| FMOW-WILDS | 32 | 72 |
| POVERTYMAP-WILDS | 64 | 120 |
| AMAZON-WILDS | 8 | 24 |
| IWILDCAM2020-WILDS | 16 | 24 |
| OGB-MOLPCBA | 32 | 4,096 |
| GLOBALWHEAT-WILDS | 4 | 8 |
Table 13: The batch sizes of each dataset from the original WILDS 1.0 paper and the batch sizes used in WILDS 2.0, which correspond to the maximum that can fit into 12GB of GPU memory.

For models that use batch normalization, the composition of each batch affects the way in which batch normalization is applied. For CORAL, DANN, and Pseudo-Label, we concatenate the labeled and unlabeled data together in each batch, so the labeled and unlabeled data are jointly normalized.
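To illustrate why batch composition matters, here is a toy single-feature sketch (illustrative numbers, not the actual models): jointly normalizing a concatenated batch uses different statistics, and hence produces different normalized activations, than normalizing the labeled and unlabeled halves separately.

```python
def batch_norm(batch, eps=1e-5):
    # Normalize a 1-D batch of activations using its own mean and variance.
    mu = sum(batch) / len(batch)
    var = sum((v - mu) ** 2 for v in batch) / len(batch)
    return [(v - mu) / (var + eps) ** 0.5 for v in batch]

labeled = [0.0, 1.0, 2.0]       # toy activations for labeled examples
unlabeled = [10.0, 11.0, 12.0]  # toy activations for unlabeled examples

# Joint normalization (CORAL, DANN, Pseudo-Label): one concatenated batch,
# so the unlabeled data shifts the statistics applied to the labeled data.
joint = batch_norm(labeled + unlabeled)

# Separate normalization (e.g., Noisy Student, pre-training): two passes,
# each half centered on its own mean.
separate = batch_norm(labeled) + batch_norm(unlabeled)
```

In the joint case, the labeled activations are all pushed below zero by the larger unlabeled values, whereas in the separate case each half is centered on its own mean; this is the effect the forward-pass choices described here control.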
| Dataset \ # epochs | ERM | 3:1 ratio | 7:1 ratio | 15:1 ratio |
| --- | --- | --- | --- | --- |
| IWILDCAM2020-WILDS | 12 | 6 | 3 | 2 |
| CAMELYON17-WILDS | 10 | 5 | 3 | 2 |
| FMOW-WILDS | 60 | 30 | 15 | 8 |
| POVERTYMAP-WILDS | 150 | 75 | 38 | 19 |
| GLOBALWHEAT-WILDS | 12 | 6 | 3 | 2 |
| OGB-MOLPCBA | 200 | 100 | 50 | 25 |
| CIVILCOMMENTS-WILDS | 5 | 3 | 2 | 1 |
| AMAZON-WILDS | 3 | 2 | 1 | 1 |
Table 14: The number of epochs (complete passes over the labeled data) used for each dataset, specified for the ERM baseline as well as different ratios of unlabeled to labeled data within a batch.

For FixMatch, we jointly normalize the labeled data and the strongly augmented unlabeled data, but we normalize the weakly augmented unlabeled data in a separate forward pass; we did two forward passes to keep the overall batch sizes consistent with the other algorithms, as in Table 13, while still fitting in GPU memory. For Noisy Student, MLM pre-training, and SwAV pre-training, the unlabeled data is processed separately from the labeled data, so each batch of labeled or unlabeled data is separately normalized.

# D.4 HYPERPARAMETER TUNING

We tune each algorithm separately for each dataset by randomly sampling 10 different hyperparameter configurations within the ranges defined below. We early stop and select the best hyperparameters based on the OOD validation performance, which is computed on the labeled Validation (OOD) data for each dataset; we do not use the labeled Validation (ID) data in our experiments. We then run replicates using the best hyperparameters. For computational reasons, we do not tune hyperparameters for the pre-training algorithms, though we tune the finetuning of their resulting pre-trained models as usual.

Learning rates. For all the datasets except OGB-MOLPCBA, we multiply the learning rates used in WILDS by the ratio of the effective batch size to the original batch size used in WILDS 1.0. We center the learning rate grid around this modified learning rate $r$ and search over $r \cdot 10^{U(-1,1)}$, where $U$ is the uniform distribution. For OGB-MOLPCBA, we pick $r$ by multiplying the original learning rate by a factor of 10 instead of $4096/32 = 128$ (for ERM, which does not use gradient accumulation), because we found that the latter led to unstable optimization.

$L_{2}$-regularization.
Across all datasets and methods, we used the same $L_{2}$-regularization strengths used in WILDS 1.0.

Ratio of unlabeled to labeled data in a batch. For all the domain-invariant and self-training methods, we search over the ratio of unlabeled to labeled data in a batch, using the values $\{3:1, 7:1, 15:1\}$.

Number of epochs. We defined an epoch as a complete pass over the labeled data. This means that the number of batches / gradient steps taken per epoch varies with the ratio of unlabeled to labeled data in a batch, as a higher ratio means that each batch contains fewer labeled examples. We adjusted the number of epochs accordingly so that the total amount of compute was similar regardless of the ratio of unlabeled to labeled data in a batch. We allocated roughly twice as much compute (i.e., processing twice as many batches) to methods that used unlabeled data, compared to the purely-supervised ERM baseline. Overall, we set the number of epochs based on the WILDS 1.0 defaults, with some upwards adjustments (due to the different batch sizes and the use of unlabeled data) if we found that the best hyperparameter configuration had not converged on the validation set. Table 14 shows the total number of epochs used per dataset.

# D.5 ALGORITHM-SPECIFIC HYPERPARAMETERS

We tuned the following algorithm-specific hyperparameters:

CORAL. We searched over penalty weights $10^{U(-1,1)}$.

DANN. We searched over penalty weights $10^{U(-1,1)}$ and used separate learning rates for the featurizer, classifier, and domain discriminator. We tuned the learning rate for the classifier and domain discriminator, then fixed the learning rate of the featurizer to be a tenth of the learning rate of the classifier.

Pseudo-Label, FixMatch, and Noisy Student. We fixed the penalty weight to be 1. For FixMatch and Pseudo-Label, we searched over confidence thresholds $U(0.7, 0.95)$. Noisy Student does not use a confidence threshold.

SwAV.
We did not tune SwAV hyperparameters. See Appendix B.4 for a description of the default hyperparameters used.

Masked language modeling. We did not tune masked LM hyperparameters, opting instead to use the defaults. For both CIVILCOMMENTS-WILDS and AMAZON-WILDS, we pre-trained DistilBERT for 1,000 steps with a learning rate of $10^{-4}$ and a batch size of 8,192 sequences using gradient accumulation. Following WILDS defaults, we set the max sequence length to 300 for CivilComments and 512 for Amazon. We used FP16 training to speed up pre-training.

# D.6 COMPUTE INFRASTRUCTURE

We ran experiments on a mix of NVIDIA GPUs: V100, K80, GeForce RTX, Titan RTX, Titan Xp, and Titan V. SwAV pre-training took approximately 3 days $\times$ 4 V100 GPUs for each dataset, while masked LM pre-training took approximately 3 days on a single GPU for each dataset. The other algorithms took less than a day on a V100 to run. The runtime estimates in Section 6 are estimated for V100 GPUs. We used the Weights and Biases platform (Biewald, 2020) to monitor experiments.

# E EXPERIMENTS ON DOMAINNET

Prior work has shown that domain-invariant, self-training, and self-supervised methods can perform well on standard benchmarks for unsupervised domain adaptation. In this section, we describe our experiments on DomainNet (Peng et al., 2019), a standard unsupervised domain adaptation benchmark for object recognition. Our goal was to verify that our training/tuning protocol and our implementations of the methods we benchmark in Section 6, which differ slightly from prior work in the ways described in Appendix B, still result in models that can perform well on DomainNet. Consistent with prior work, the methods we benchmark in Section 6, with the exception of CORAL, all improve over standard ERM training in our DomainNet experiments.
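The random search used throughout (Appendices D.4 and D.5, and again for DomainNet below) samples multiplicative hyperparameters log-uniformly around a center value; a minimal sketch (the function name and center value are illustrative, not from our code):

```python
import random

def log_uniform_around(center, rng, width=1.0):
    # Sample center * 10^U(-width, width): e.g., learning rates r * 10^U(-1, 1),
    # or penalty weights 10^U(-1, 1) with center = 1.
    return center * 10 ** rng.uniform(-width, width)

rng = random.Random(0)
# Ten configurations centered at an illustrative r = 1e-3, as in the protocol
# of sampling 10 configurations per (algorithm, dataset) pair.
learning_rates = [log_uniform_around(1e-3, rng) for _ in range(10)]
```

Every sample stays within one order of magnitude of the center, so the search covers $[r/10,\, 10r]$ uniformly on a log scale rather than uniformly in absolute value.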
# E.1 SETUP

DomainNet is an object recognition dataset with approximately 600,000 images across six different domains: sketch, real, quickdraw, painting, infographic, and clipart (Peng et al., 2019). Typically, one of these domains is selected as the source and another as the target for evaluation. In our experiments, we use the real $\rightarrow$ sketch setting for two reasons: it is a common choice in prior work on DomainNet, and as our models are pre-trained on ImageNet (following Zhang et al. (2021)), we wanted to use the real domain as the source to be consistent with the realistic photographs used for ImageNet pretraining. While it is common to evaluate methods on multiple pairs of source and target domains in DomainNet, in our experiments we chose only one pair, as our goal was only to verify consistency with prior results.

Data. The DomainNet dataset includes train and test splits for each of the domains, with a $70\%$-$30\%$ split between train and test examples. The real domain has 172,947 images total: 120,906 images in the train split and 52,041 images in the test split. The target domain, sketch, has a total of 69,128 images: 48,212 in the train split and 20,916 in the test split. We used this data in the following way:

1. For training, we used the source training examples (with labels) and the target training examples (without labels).
2. For validation, we used the same set of target training examples, but with labels; this overestimates performance in a true domain adaptation setting (where one would not have labeled target data), but it is a common practice in the literature, and we followed it for consistency with Jiang et al. (2020) and Zhang et al. (2021).
3. For evaluation, we used the source test examples as the in-distribution test set, and the target test examples as the out-of-distribution test set.

Hyperparameters and other details. Other experimental details followed our main experiments in Section 6.
We used the strong and weak data augmentations described for image classification in Appendix C. We set the total batch size to 96, which is the maximum that can fit on 12GB of GPU memory. We tuned hyperparameters with the protocol described in Appendix D.4. Specifically, for all methods, we fixed $L_{2}$-regularization at $10^{-4}$. We then randomly sampled learning rates from $10^{U(-4,-2)}$ to train the ERM with data augmentation model. For all other models, we took the best learning rate that we found for the ERM with data augmentation model and searched over one order of magnitude lower and higher from it. As in Zhang et al. (2021), we used a ResNet-50 model initialized by pretraining on ImageNet. For SwAV pre-training, instead of following the early-stopping procedure in Appendix B.4, we trained for the full 400 epochs used in Caron et al. (2020), since the experiment finished relatively quickly compared to the larger WILDS 2.0 datasets.

# E.2 RESULTS

Table 15 shows the results of our experiments on real $\rightarrow$ sketch. The use of (strong) data augmentation improved ERM performance from $34.9\%$ to $35.9\%$. All unsupervised adaptation methods except CORAL improved over ERM. We also tested the use of strong vs. weak augmentation for labeled examples for both Pseudo-Label and FixMatch, and we found that using strong augmentation for the labeled examples improves performance.
| Method | In-distribution (real) | Out-of-distribution (sketch) |
| --- | --- | --- |
| ERM (-data aug) | 82.6 (0.0) | 34.9 (0.2) |
| ERM | 82.5 (0.3) | 35.9 (0.3) |
| CORAL | 79.1 (0.4) | 33.6 (0.6) |
| DANN | 77.8 (0.2) | 39.4 (0.8) |
| Pseudo-Label | 79.9 (0.2) | 36.1 (0.4) |
| Pseudo-Label (weak aug) | 79.9 (0.6) | 32.0 (0.8) |
| FixMatch | 80.8 (0.2) | 50.2 (0.4) |
| FixMatch (weak aug) | 80.1 (0.1) | 49.3 (0.2) |
| Noisy Student | 82.0 (0.3) | 39.7 (0.2) |
| SwAV | 79.0 (0.3) | 38.2 (0.4) |
Table 15: The in-distribution vs. out-of-distribution test performance of each method on DomainNet (real $\rightarrow$ sketch). We also include the results of applying weak instead of strong augmentation on labeled examples for Pseudo-Label and FixMatch. Parentheses show standard deviation across 3 replicates.

For DANN, Pseudo-Label, and FixMatch, we compared our results against the results reported in Zhang et al. (2021). Performance was similar for DANN (ours, $39.4\%$; theirs, $40.0\%$). For FixMatch, our implementation performs better (ours, $50.2\%$; theirs, $45.3\%$); this is partially due to our use of strong instead of weak augmentation for the labeled data, which increases performance by $0.9\%$. For Pseudo-Label, our implementation performs worse (ours, $36.1\%$; theirs, $40.6\%$), which we believe is due to variation in hyperparameter tuning.

For Noisy Student, Berthelot et al. (2021) reported significantly lower numbers (ours, $39.7\%$; theirs, $32.6\%$). However, this is expected, as they trained their models from scratch, whereas we used ImageNet-pretrained models.

We were unable to find comparable results in prior work for CORAL and SwAV pre-training on the real $\rightarrow$ sketch split. Prior work has shown that these methods can improve performance on other unsupervised adaptation datasets (Sun & Saenko, 2016; Shen et al., 2021). In our DomainNet experiments, we found that SwAV pre-training did improve performance over ERM, though CORAL did not (Table 15).

# F FULLY-LABELED ERM EXPERIMENTAL DETAILS

The self-training methods we evaluate in Section 5 generate a pseudolabel $\tilde{y}_{\mathrm{U}}$ for each unlabeled example $x_{\mathrm{U}}$ and then train on $(x_{\mathrm{U}},\tilde{y}_{\mathrm{U}})$ as if the pseudolabels were true labels. However, these pseudolabels may not be accurate.
In this section, we describe how we ran fully-labeled ERM experiments using ground truth labels on the "unlabeled" data to establish informal upper bounds on how well we might expect a standard self-training approach to perform with perfect pseudolabel accuracy.

For four of our datasets (AMAZON-WILDS, CIVILCOMMENTS-WILDS, IWILDCAM2020-WILDS, and FMOW-WILDS), we curated the "unlabeled" data by taking labeled examples and discarding the ground truth labels. For example, all 268,761 of the unlabeled target reviews in AMAZON-WILDS actually have associated star ratings; these are available in our data loaders, but in our main experiments we treat these reviews as unlabeled by not loading the star ratings. We evaluated models trained via empirical risk minimization (ERM) on the combination of the standard labeled training set and the unlabeled data with these hidden labels revealed. For example, in AMAZON-WILDS, we pool together the labeled source examples as well as the unlabeled target examples with ground truth labels, and evaluate ERM models trained on all of that data. As with all of the other experiments in this paper, we evaluate test performance for all datasets on the labeled target splits, so at no point are we training on our actual test examples.

# F.1 HYPERPARAMETERS

Pooling labeled and unlabeled data. For all datasets, we pooled labeled source examples with examples from the same "unlabeled" split as in our main experiments (Table 2). We computed gradients for labeled minibatches and unlabeled minibatches separately, which means that for models using batch normalization, the labeled and unlabeled data were normalized separately. However, we fixed the labeled to "unlabeled" batch size ratio to match the overall labeled to unlabeled dataset size ratio, so other than the batch normalization effects, the training procedure can be viewed as running ERM on the pooled labeled and "unlabeled" data.

Number of epochs.
With the exception of IWILDCAM2020-WILDS, detailed below, we followed the procedure in Appendix D.4 to adjust the number of epochs based on the labeled to unlabeled batch size ratios. This resulted in a similar amount of computation allocated to these fully-labeled ERM experiments as the other experiments in Table 2.

Other details. Other experimental details were kept similar to the other experiments in the paper. Specifically, we tuned each experiment by randomly sampling 10 different hyperparameters within the ranges defined in Appendix D.4; the only hyperparameter we tuned in these experiments was the learning rate. We early stopped and selected the best hyperparameters based on the OOD validation performance, and then ran replicates using the best hyperparameters. We also used data augmentation for IWILDCAM2020-WILDS and FMOW-WILDS but not for AMAZON-WILDS and CIVIL COMMENTS-WILDS.

# F.2 DATASET-SPECIFIC DETAILS

AMAZON-WILDS. We matched the experiments in Table 2 by training on the unlabeled target data (268,761 examples). In addition, we ran a separate experiment where we trained on the unlabeled extra data instead of the unlabeled target data, as the former has $10 \times$ the number of examples (2,927,841 examples). However, this did not improve performance. Using the unlabeled target data, we obtained an average accuracy of $73.6 (\pm 0.1)$ and a 10th percentile accuracy of $56.4 (\pm 0.8)$, whereas using the unlabeled extra data, we obtained an average accuracy of $73.1 (\pm 0.1)$ and a 10th percentile accuracy of $54.7 (\pm 0.0)$.

CIVIL COMMENTS-WILDS. We used the unlabeled extra split (1,551,515 examples). As in our other experiments on CIVIL COMMENTS-WILDS, we accounted for label imbalance by sampling class-balanced labeled and "unlabeled" batches during training.

IWILDCAM2020-WILDS. We used the unlabeled extra split.
Out of the 819,120 unlabeled extra examples, 108,452 examples have ground truth labels (animal species) that are not present in the labeled training and test sets, so we omitted those examples and trained on the remaining 710,668 examples. We found that the fully-labeled ERM training required twice as many epochs to converge compared to the other methods using unlabeled data, so we doubled the amount of compute allocated to the fully-labeled IWILDCAM2020-WILDS experiments.

FMOW-WILDS. We used the unlabeled target split (173,208 examples).

# G USING THE WILDS LIBRARY WITH UNLABELED DATA

We have extended the existing WILDS library (Koh et al., 2021) to add data loaders for each of the 8 datasets with unlabeled data. These data loaders are compatible with the WILDS 1.0 APIs, allowing the unlabeled data to be accessed in a similar way to the labeled data:

```python
>>> from wilds import get_dataset
>>> from wilds.common.data_loaders import get_train_loader
>>> import torchvision.transforms as transforms
# Load the labeled data
>>> dataset = get_dataset(dataset="fmow", download=True)
>>> labeled_subset = dataset.get_subset("train", transform=transforms.ToTensor())
>>> data_loader = get_train_loader("standard", labeled_subset, batch_size=16)
# Load the unlabeled data
>>> dataset = get_dataset(dataset="fmow", unlabeled=True, download=True)
>>> unlabeled_subset = dataset.get_subset("test_unlabeled", transform=transforms.ToTensor())
>>> unlabeled_data_loader = get_train_loader("standard", unlabeled_subset, batch_size=64)
# Train loop
>>> for labeled_batch, unlabeled_batch in zip(data_loader, unlabeled_data_loader):
...     x, y, metadata = labeled_batch
...     unlabeled_x, unlabeled_metadata = unlabeled_batch
...     ...
```

Figure 3: Example of data loading for both labeled and unlabeled data.

As in the existing WILDS library, data downloading is automated.
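The labeled and unlabeled loaders can use different batch sizes, and Appendix F.1 fixes their ratio to match the overall dataset size ratio. A minimal sketch of that split (the helper name is ours, not part of the WILDS library):

```python
def matched_batch_sizes(total_batch_size, n_labeled, n_unlabeled):
    """Split a total per-step batch so that the labeled:unlabeled batch
    ratio matches the overall dataset size ratio, as in Appendix F.1.
    (Hypothetical helper; not part of the WILDS library.)"""
    labeled_frac = n_labeled / (n_labeled + n_unlabeled)
    labeled_bs = max(1, round(total_batch_size * labeled_frac))
    return labeled_bs, total_batch_size - labeled_bs

# e.g. 100k labeled vs 300k "unlabeled" examples, 80 examples per step
assert matched_batch_sizes(80, 100_000, 300_000) == (20, 60)
```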
In addition, we implemented CORAL, DANN, Pseudo-Label, FixMatch, and Noisy Student using the existing WILDS interfaces. This allows developers to easily extend these algorithms and evaluate them in a standardized way on all of the WILDS datasets with unlabeled data. The WILDS repository also contains scripts for masked language model pre-training and for SwAV pre-training, which uses a modified version of the public SwAV repository that can interface with the WILDS data loaders.

# F8NET: FIXED-POINT 8-BIT ONLY MULTIPLICATION FOR NETWORK QUANTIZATION

Qing Jin $^{1,2*}$ Jian Ren $^{1}$ Richard Zhuang $^{1}$ Sumant Hanumante $^{1}$ Zhengang Li $^{2}$ Zhiyu Chen $^{3}$ Yanzhi Wang $^{2}$ Kaiyuan Yang $^{3}$ Sergey Tulyakov $^{1}$

$^{1}$ Snap Inc.
$^{2}$ Northeastern University, USA $^{3}$ Rice University, USA

# ABSTRACT

Neural network quantization is a promising compression technique to reduce memory footprint and save energy consumption, potentially leading to real-time inference. However, there is a performance gap between quantized and full-precision models. To reduce it, existing quantization approaches require high-precision INT32 or full-precision multiplication during inference for scaling or dequantization. This introduces a noticeable cost in terms of memory, speed, and required energy. To tackle these issues, we present F8Net, a novel quantization framework consisting of only fixed-point 8-bit multiplication. To derive our method, we first discuss the advantages of fixed-point multiplication with different formats of fixed-point numbers and study the statistical behavior of the associated fixed-point numbers. Second, based on the statistical and algorithmic analysis, we apply different fixed-point formats for the weights and activations of different layers. We introduce a novel algorithm to automatically determine the right format for each layer during training. Third, we analyze a previous quantization algorithm, parameterized clipping activation (PACT), and reformulate it using fixed-point arithmetic. Finally, we unify the recently proposed method for quantization fine-tuning and our fixed-point approach to show the potential of our method. We verify F8Net on ImageNet for MobileNet V1/V2 and ResNet18/50. Our approach achieves comparable or better performance when compared not only to existing quantization techniques with INT32 multiplication or floating-point arithmetic, but also to the full-precision counterparts, achieving state-of-the-art performance.
# 1 INTRODUCTION

Real-time inference on resource-constrained and efficiency-demanding platforms has long been desired and extensively studied over the past decades, resulting in significant improvements in the trade-off between efficiency and accuracy (Han et al., 2015; Liu et al., 2018; Mei et al., 2019; Tanaka et al., 2020; Ma et al., 2020; Mishra et al., 2020; Liang et al., 2021; Jin et al., 2021; Liu et al., 2021). As a model compression technique, quantization is promising compared to other methods, such as network pruning (Tanaka et al., 2020; Li et al., 2021; Ma et al., 2020; 2021a; Yuan et al., 2021) and slimming (Liu et al., 2017; 2018), as it achieves a large compression ratio (Krishnamoorthi, 2018; Nagel et al., 2021) and is computationally beneficial for integer-only hardware. The latter is especially important because many hardware platforms (e.g., most brands of DSPs (Ho, 2015; QCOM, 2019)) only support integer or fixed-point arithmetic for accelerated implementation and cannot deploy models with floating-point operations. However, the drop in performance (e.g., classification accuracy) caused by quantization error restricts the wide application of such methods (Zhu et al., 2016).

To address this challenge, many approaches have been proposed, which can be categorized into simulated quantization, integer-only quantization, and fixed-point quantization (Gholami et al., 2021). Fig. 1 shows a comparison between these implementations. For simulated quantization, previous works propose to use trainable clipping-levels (Choi et al., 2018), together with scaling techniques on activations (Jin et al., 2020b) and/or gradients (Esser et al., 2019), to facilitate training of the quantized models. However, some operations in these works, such as batch normalization (BN), are conducted with full-precision to stabilize training (Jin et al., 2020b; Esser et al., 2019), limiting the practical application on integer-only hardware. Meanwhile, integer-only quantization, where model inference can be implemented with integer multiplication, addition, and bit shifting, has shown significant progress in recent studies (Jacob et al., 2018; Yao et al., 2021; Kim et al., 2021). Although floating-point operations are removed to enable models to run on devices with limited support of operation types, INT32 multiplication is still required for these methods. On the other hand, fixed-point quantization, which also applies low-precision logic for arithmetic, does not require INT32 multiplication or integer division. For example, to replace multiplication by bit shifting, Jain et al. (2019) utilize trainable power-of-2 scale factors to quantize the model.

![](images/d5b9db41140ea169214a2818a56998be9dcabf71c99303d56bc9c2488cabff5b.jpg)
(a) Full-precision.

![](images/997c1ccc3668ce9b7ef804cb4faa60442e551b9ee87f9b929d3f4831fd5b2725.jpg)
(b) Simulated quant.

![](images/24fcbff8fd7ab01ab678f54bdc29c1f9ad8ff88edb09e7dd422e184ae887c17b.jpg)
(c) Integer-only quant.

![](images/862f8c54278841cfd11821e93bd5a2b63bea6c5e7f4df122ab4de405241a457f.jpg)
(d) Fixed-point quant.

Figure 1: Inspired by Gholami et al. (2021), we show the comparison of the full-precision model (presented in (a)) and different quantization settings: (b) simulated quantization; (c) integer-only quantization; and (d) fixed-point quantization. Note the combination of the last two operations in integer-only quantization is termed dyadic scaling in the literature (Yao et al., 2021).

In this work, we adopt fixed-point quantization. Our work differs from previous efforts (Jain et al., 2019) in three major aspects. First, to determine the minimum error quantization threshold, we conduct statistical analysis on fixed-point numbers. Second, we unify parameterized clipping activation (PACT) and fixed-point arithmetic to achieve high performance and high efficiency.
Third, we discuss and propose quantization fine-tuning methods for different models. We dub our method F8Net, as it consists of only Fixed-point 8-bit multiplication employed for Network quantization. We thoroughly study the problem with fixed-point numbers, where only INT8 multiplication is involved, without any 32-bit multiplication, whether integer, floating-point, or fixed-point. Throughout this paper we focus on 8-bit quantization, which is the most widely supported setting across devices and is typically sufficient for efficiency and performance requirements. Our contributions are as follows.

- We show that an 8-bit fixed-point number is able to represent a wide range of values with negligible relative error, once the format is properly chosen (see Fig. 3 and Fig. 4). This critical characteristic gives fixed-point numbers a much stronger representative capability than integer values.
- We propose a method to determine the fixed-point format, i.e., the fractional length, for weights and activations using their variance. This is achieved by analyzing the statistical behavior of fixed-point values of different formats, especially those quantized from normally distributed random variables with different variances. The analysis reveals the relationship between relative quantization error and variance, which further helps us build an approximate formula to determine the fractional length from the variance.
- We develop a novel training algorithm for fixed-point models by unifying fixed-point quantization and PACT (Choi et al., 2018). Besides, we show the impact of fractional length sharing in residual blocks, which is also important for obtaining good performance for quantized models.
- We validate our approach on various models, including MobileNet V1/V2 and ResNet18/50 on ImageNet for image classification, and demonstrate better performance than existing methods that resort to 32-bit multiplication.
We also integrate the recently proposed fine-tuning method for training quantized models from pre-trained full-precision models with our approach for further verification.

# 2 RELATED WORK

Quantization is one of the most widely-used techniques for neural network compression (Courbariaux et al., 2015; Han et al., 2015; Zhu et al., 2016; Zhou et al., 2016; Mishra et al., 2017; Park et al., 2017; Banner et al., 2018), with two types of training strategies: Post-Training Quantization directly quantizes a pre-trained full-precision model (He & Cheng, 2018; Nagel et al., 2019; Fang et al., 2020a;b; Garg et al., 2021); Quantization-Aware Training uses training data to optimize quantized models for better performance (Gysel et al., 2018; Esser et al., 2019; Hubara et al., 2020; Tailor et al., 2020). In this work, we focus on the latter, which has been explored in several directions. One direction uses uniform-precision quantization, where all layers share the same precision (Zhou et al., 2018; Wang et al., 2018; Choukroun et al., 2019; Gong et al., 2019; Langroudi et al., 2019; Jin et al., 2020a; Bhalgat et al., 2020; Chen et al., 2020; Yang et al., 2020; Darvish Rouhani et al., 2020; Oh et al., 2021). Another direction studies mixed-precision quantization, which determines the bit-width for each layer through search algorithms, aiming at a better accuracy-efficiency trade-off (Dong et al., 2019; Wang et al., 2019; Habi et al., 2020; Fu et al., 2020; 2021; Yang & Jin, 2020; Zhao et al., 2021a;b; Ma et al., 2021b). There are also binarized networks, which apply only 1-bit precision (Rastegari et al., 2016; Hubara et al., 2016; Cai et al., 2017; Bulat et al., 2020; Guo et al., 2021). Despite the fact that quantization helps reduce energy consumption and inference latency, it is usually accompanied by performance degradation. To alleviate this problem, several methods have been proposed.

One type of effort focuses on simulated quantization.
The strategy is to leave some operations, e.g., BN, in full-precision for stabilized training of the quantized models (Choi et al., 2018; Esser et al., 2019; Jin et al., 2020b). Nevertheless, these methods limit the application of the quantized models on resource-constrained hardware, such as DSPs, where full-precision arithmetic is not supported for accelerated computing (QCOM, 2019; Ho, 2015). To completely eliminate floating-point operations from the quantized model, integer-only quantization techniques emulate full-precision multiplication by 32-bit integer multiplication followed by bit shifting (Jacob et al., 2018; Zhu et al., 2020; Wu et al., 2020; Yao et al., 2021; Kim et al., 2021). However, the INT32 multiplication in these works requires one more operation, which results in extra energy consumption and higher latency (Gholami et al., 2021). In parallel, recent work (Jain et al., 2019) proposes to restrict all scaling factors for weights and activations to power-of-2 values, which belongs to the family of fixed-point quantization methods (Lin et al., 2016; Jain et al., 2019; Kim & Kim, 2021; Mitschke et al., 2019; Enderich et al., 2019b; Chen et al., 2017; Enderich et al., 2019a; Zhang et al., 2020; Goyal et al., 2021). This enables the model to incorporate only INT8 or even INT4 multiplications, followed by INT32 bit shifting. However, there is still a lack of a thorough study of the benefits of using fixed-point arithmetic. Also, the power-of-2 scaling factors are directly determined from the training data without theoretical analysis and guidance. In this work, we give an extensive analysis, especially of the potential and theoretical principles of using fixed-point values for quantized models, and demonstrate that, with proper analysis and design, a model quantized with only INT8 multiplication involved is able to achieve comparable or even better performance than integer-only methods implemented with INT32 multiplication.
# 3 ANALYSIS OF FIXED-POINT REPRESENTATION

In this section, we first introduce fixed-point multiplication (Smith et al., 1997; Tan & Jiang, 2018) and analyze the distribution of weights from different layers in a well-trained full-precision model (Sec. 3.1). We then investigate the statistical properties of fixed-point numbers and demonstrate the potential of approximating full-precision values by 8-bit fixed-point numbers with different formats (Sec. 3.2). After that, we study the relationship between the standard deviation of random variables and the optimal fixed-point format with the smallest quantization error. Finally, we derive an approximate formula relating the standard deviation and the fixed-point format, which is verified empirically and employed in our final algorithms (Sec. 3.3).

# 3.1 ADVANTAGES OF FIXED-POINT ARITHMETIC

A fixed-point number is characterized by its format, which includes both the word length, indicating the whole bit-width of the number, and the fractional length (FL), characterizing the range and resolution of the represented values (Smith et al., 1997). Fixed-point arithmetic, especially fixed-point multiplication, is widely utilized for applications in, e.g., digital signal processing (Smith et al., 1997; Tan & Jiang, 2018). Compared with integer or floating-point multiplication, fixed-point multiplication has two major characteristics. First, multiplying two fixed-point numbers is more efficient than multiplying two floating-point numbers, especially on resource-constrained devices such as DSPs. Second, it is more powerful than its integer counterpart due to its versatility and the representative ability of fixed-point numbers (there can be tens of different implementations for fixed-point multiplication but only one for integer and floating-point ones (Smith et al., 1997)). This efficiency and versatility make fixed-point quantization a more appealing solution than integer-only quantization.

![](images/27ce592d53316d5bfb996896dd8f3e1fb7325a4d8fe08501c9220ea26c13061b.jpg)
(a) Weight range.

![](images/5c4c657a44e7a2682fb3a371b4a6743f256e89ee41c7515af24d5d5af260f11f.jpg)
(b) Weight and activation fractional length.

Figure 2: (a) Value range of effective weight (see Sec. 4.2) for a pre-trained full-precision (FP) model, and (b) fractional lengths of each layer for a well-trained fixed-point model for MobileNet V2.

![](images/8fb8144c8a18c7bd3cb9a6ee32b0c2c98f73a7c3f4b948c6338073514b9d3ee3.jpg)
(a) Signed quant. for Gaussian R.V.

![](images/b025ef3a8747dc982d83195029caf684b021cb32b632b80e5f32592190177693.jpg)
(b) Unsigned quant. for Rectified Gaussian R.V.

Figure 3: Representing potential for 8-bit signed (a) and unsigned (b) fixed-point numbers with different formats. The figures plot the relationship between relative quantization error and the standard deviation for different fixed-point formats. Both experiments use zero-mean Gaussian random variables (R.V.), with ReLU applied in (b).

Specifically, as shown in Fig. 2a, the scales of weights from different layers in a pre-trained full-precision model can vary by orders of magnitude, ranging from less than 0.1 to nearly 4. Direct quantization with only integers inevitably introduces considerable quantization error, unless more precision and more operations are involved, such as using INT32 multiplication together with bit shifting for scaling as shown in Fig. 1c. On the other hand, employing fixed-point numbers has the potential to reduce quantization error without relying on high-precision multiplication, as weights and activations from different layers have the extra degree of freedom of using different formats during quantization. Indeed, as shown in Fig. 2b for a well-trained MobileNet V2 with 8-bit fixed-point numbers, the fractional lengths for weights and activations vary from layer to layer.
This raises the question of how to determine the formats for each layer. In the following, we study this for 8-bit fixed-point models.

# 3.2 STATISTICAL ANALYSIS FOR FIXED-POINT FORMAT

For a predefined bit-width, an integer, which is a special case of a fixed-point number with zero fractional length, can take only a predefined set of values, which severely constrains the potential of integer-only quantization. On the other hand, fixed-point numbers, with an extra degree of freedom, i.e., the fractional length, are able to represent a much wider range of full-precision values by selecting the proper format, and thus they are more suitable for quantization. As an example, Fig. 3 shows the relative quantization error with 8-bit fixed-point values using different formats for a set of random variables sampled from normal distributions (both signed and unsigned, with the latter processed by ReLU before quantization) with zero mean and different standard deviations $\sigma$ (more experimental details in Appx. 7.2). From the experiments, we make the following two observations.

![](images/011ad904f8ee8bded63871f5d444f4f57908ebb57af635dfa95cc832c41a477d.jpg)
(a) Optimal fractional length and minimum relative error for signed quant.

![](images/2cae6663a571802b1b811705a380948f93c375ebc30b0d4d859c5a06df41add4.jpg)
(b) Relationship between threshold standard deviation and fractional length for signed quant.

![](images/9b43dabefda382e59f73ec275dac23ea773e47b90848fadd54ef5d0e0aecb0fc.jpg)
(c) Optimal fractional length and minimum relative error for unsigned quant.

![](images/6c1d7989d63f216d360519d8621c726d1a534aacb4f297d9efe6a7f64894b432.jpg)
(d) Relationship between threshold standard deviation and fractional length for unsigned quant.

Figure 4: Determining optimal fractional length from standard deviation. (a) and (c) illustrate the optimal fractional length and minimum relative quantization error against standard deviation for signed and unsigned 8-bit fixed-point quantization of Gaussian and rectified Gaussian random variables. (b) and (d) show the relationship between the threshold standard deviation and the fractional length.

Observation 1: Fixed-point numbers with different formats have different optimal representing regions, and the minimum relative error and optimal standard deviation (annotated as a star) vary for different fractional lengths (Fig. 3). This is because the format controls the value magnitude and the representation resolution (the least significant bit).

Observation 2: Larger fractional lengths are more robust for representing smaller numbers, while smaller fractional lengths are more suitable for larger ones. For a given standard deviation, using a small fractional length risks underflow, while a large fractional length might cause overflow issues. Specifically, integers (black curves in Fig. 4) are much more prone to underflow issues and have large relative errors for sufficiently small values to quantize.

# 3.3 CHOOSING OPTIMAL FIXED-POINT FORMAT

With the above observations, we are interested in answering two questions:

(1) Can we achieve a small fixed-point quantization error for a wide range of full-precision values by always using the optimal fractional length corresponding to the smallest relative error?

To answer this, we first plot the smallest possible relative error among all the candidate fixed-point formats against the standard deviation. As shown by the red lines in Fig. 4a and Fig. 4c, for a zero-mean normal distribution, by always choosing the optimal fixed-point format, we are able to achieve a relative quantization error smaller than $1\%$ for standard deviations spanning at least around three orders of magnitude. For example, for signed quantization, the standard deviation can range from 0.1 to around 40 to achieve less than $1\%$ error, and for unsigned quantization, the standard deviation can range from 0.1 to 100.
The experiments verify our presumption that using fixed-point values with the optimal formats achieves negligible quantization error.

(2) Can we have a simple way to determine the optimal fractional length?

To answer this, we plot the optimal fractional length, determined from the statistics of the full-precision values, against the standard deviation, as shown by the blue lines in Fig. 4a and Fig. 4c. We find that the threshold $\sigma$ value corresponding to each jumping point is almost equidistant on the log scale of the standard deviation. This is expected, as the representing regions of different formats differ by factors of powers of 2. Plotting the threshold standard deviation (on a log scale) against the corresponding optimal fractional length (Fig. 4b and Fig. 4d), we find their relationship is almost linear, leading to the following semi-empirical approximate formulas to determine the optimal fractional length $\mathrm{FL}^*$ from the standard deviation (more discussion in Appendix 7.7):

$$
\text{Signed}: \quad \mathrm{FL}^* \approx \left\lfloor \log_2 \frac{40}{\sigma} \right\rfloor, \qquad \text{Unsigned}: \quad \mathrm{FL}^* \approx \left\lfloor \log_2 \frac{70}{\sigma} \right\rfloor. \tag{1}
$$

In the following, unless specifically stated, we use (1) to determine the fractional length for both weight and activation quantization. Note that we only calculate the standard deviation during training.

# 4 METHODS

In this section, we discuss our proposed training technique for neural network quantization with fixed-point numbers, where the formats of weights and activations in each layer are determined based on (1) during training. We first analyze how to unify PACT and fixed-point quantization (Sec. 4.1). Then we show how to quantize weights and activations, especially how to update BN running statistics and fractional lengths (Sec. 4.2).
Finally, we discuss the necessity of relating the scaling factors of two adjacent layers to calculate the effective weights for quantization, especially for residual blocks where some layers have several layers following them (Sec. 4.3).

# 4.1 UNIFYING PACT AND FIXED-POINT QUANTIZATION

To quantize a positive value $x$ with an unsigned fixed-point number of format (WL, FL), where WL and FL denote the word length and fractional length of the fixed-point number, respectively, we define the quantization function fix_quant as:

$$
\operatorname{fix\_quant}(x) = \frac{1}{2^{\mathrm{FL}}} \operatorname{round}\left(\operatorname{clip}\left(x \cdot 2^{\mathrm{FL}}, 0, 2^{\mathrm{WL}} - 1\right)\right), \tag{2}
$$

where clip is the clipping function, and $0 \leq \mathrm{FL} \leq \mathrm{WL}$ for unsigned fixed-point numbers. Note that fixed-point quantization has two limitations: overflow, caused by clipping into the representable region, and underflow, introduced by the rounding function. Both introduce approximation errors. To minimize the error, we determine the optimal fractional length for each layer based on the analysis in Sec. 3.3.

To find a better way to quantize a model using fixed-point numbers, we take a look at one of the most successful quantization techniques, PACT (Choi et al., 2018), which clips the full-precision value with a learned clipping-level $\alpha$ before quantization:

$$
\operatorname{PACT}(x) = \frac{\alpha}{M} \operatorname{round}\left(\frac{M}{\alpha} \operatorname{clip}(x, 0, \alpha)\right), \tag{3}
$$

where $M$ is a pre-defined scale factor mapping the value from $[0,1]$ to $[0,M]$. The formal similarity between (2) and (3) inspires us to relate them to each other as (more details in Appx. 7.3):

$$
\operatorname{PACT}(x) = \frac{2^{\mathrm{FL}} \alpha}{2^{\mathrm{WL}} - 1} \operatorname{fix\_quant}\left(\frac{2^{\mathrm{WL}} - 1}{2^{\mathrm{FL}} \alpha} x\right), \tag{4}
$$

where we have set $M = 2^{\mathrm{WL}} - 1$, which is the typical setting. With this relationship, we can implement PACT and train the clipping-level $\alpha$ implicitly with fixed-point quantization.

# 4.2 UPDATING BN AND FRACTIONAL LENGTH

Double Forward for BN Fusion. To quantize the whole model with only 8-bit fixed-point multiplication involved, we need to tackle the scaling factors from the BN layer, including both the weight and the running variance. Specifically, we need to quantize the effective weight that fuses the weight of the convolution layer with the weight and running variance of BN (Jacob et al., 2018; Yao et al., 2021). This raises the question of how to determine the running statistics during training. To solve this problem, we apply the forward computation twice. For the first forward, we apply the convolution using the quantized input but the full-precision weight of the convolution layer, and use the output to update the running statistics of BN. In this way, the effective weight to quantize becomes available. Note there is no backpropagation for this step. For the second forward, we quantize the combined effective weight to get the final output of the convolution and BN layers and perform backpropagation.

Updating Fractional Length. Different from existing work that directly trains the fractional length (Jain et al., 2019), we determine the fractional length for weights on-the-fly during training by inferring it from the current value of the weights, using (1). For the fractional length of activations, we use a buffer to store and update the value with a momentum of 0.1, similar to how BN running statistics are updated. Once the fractional lengths are determined after training, we keep them fixed for inference.
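To make the unification concrete, here is a minimal numerical sketch of (1), (2), and (4) with WL = 8. This is an illustration under our reading of the formulas, not the paper's training code:

```python
import math
import numpy as np

WL = 8  # word length used throughout the paper

def optimal_fl(sigma, signed=True):
    # Eq. (1): semi-empirical optimal fractional length from the std. dev.
    return math.floor(math.log2((40.0 if signed else 70.0) / sigma))

def fix_quant(x, fl):
    # Eq. (2): unsigned fixed-point quantization with format (WL, FL).
    return np.round(np.clip(x * 2.0**fl, 0.0, 2**WL - 1)) / 2.0**fl

def pact(x, alpha):
    # Eq. (3) with M = 2^WL - 1, the typical setting.
    m = 2**WL - 1
    return (alpha / m) * np.round((m / alpha) * np.clip(x, 0.0, alpha))

# Eq. (1): a standard deviation of 2.0 maps to FL = floor(log2(40/2)) = 4.
assert optimal_fl(2.0) == 4
assert optimal_fl(1.0, signed=False) == 6

# Eq. (4): PACT equals fixed-point quantization up to the fix scaling
# factor eta_fix = 2^FL * alpha / (2^WL - 1), for any fractional length.
x = np.array([-0.5, 0.1, 0.3, 1.234, 3.7, 5.9, 6.0, 7.5])
alpha, fl = 6.0, 4
eta_fix = 2.0**fl * alpha / (2**WL - 1)
assert np.allclose(pact(x, alpha), eta_fix * fix_quant(x / eta_fix, fl))
```

The last assertion holds for any FL, which is exactly what allows the clipping-level $\alpha$ to be trained implicitly through fixed-point quantization.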
# 4.3 RELATING SCALING FACTORS BETWEEN ADJACENT LAYERS

As shown in (4), there are still two extra factors in the quantization operation, which we express through the fix scaling factor $\eta_{\mathrm{fix}}$:

$$
\eta_{\mathrm{fix}} = \frac{2^{\mathrm{FL}} \alpha}{2^{\mathrm{WL}} - 1}. \tag{5}
$$

Now $\alpha$ is a trainable parameter in full precision, which means the fix scaling factor is also in full precision. To eliminate this undesired extra computation, we absorb it into the above-mentioned effective weights for quantization (Sec. 4.2). However, the fix scaling factor occurs twice: once for rescaling after quantization ($\eta_{\mathrm{fix}}$) and once for scaling before quantization ($1/\eta_{\mathrm{fix}}$). To completely absorb it, we need to relate two adjacent layers. In fact, for a mapping that includes convolution, BN, and ReLU (more details are shown in Appx. 7.5), we apply PACT quantization to relate the activations of two adjacent layers as:

$$
q_i^{(l+1)} = \operatorname{fix\_quant}\left(\sum_{j=1}^{n^{(l)}} \underbrace{\frac{\gamma_i^{(l)}}{\sigma_i^{(l)}} \frac{\eta_{\mathrm{fix}}^{(l)}}{\eta_{\mathrm{fix}}^{(l+1)}} W_{ij}^{(l)}}_{\text{Effective Weight}} q_j^{(l)} + \underbrace{\frac{1}{\eta_{\mathrm{fix}}^{(l+1)}} \left(\beta_i^{(l)} - \frac{\gamma_i^{(l)}}{\sigma_i^{(l)}} \mu_i^{(l)}\right)}_{\text{Effective Bias}}\right), \tag{6}
$$

where $q$ is the fixed-point activation, $W$ the full-precision weight of the convolution layer, $i$ and $j$ the spatial indices, $n^{(l)}$ the total number of multiplications, and the superscript $(l)$ indicates the $l$-th block consisting of convolution and BN. $\gamma, \beta, \sigma, \mu$ are the learned weight, bias, running standard deviation, and running mean of the BN layer, respectively.
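The Conv-BN fusion of (6) can be illustrated with a toy sketch. All shapes, names, and values below are illustrative assumptions: a $1 \times 1$ convolution is reduced to a matrix multiply, and the outer fix_quant rounding is omitted to isolate the fusion algebra:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3

W = rng.normal(size=(n_out, n_in))          # full-precision conv weight
gamma = rng.uniform(0.5, 1.5, size=n_out)   # BN weight
beta = rng.normal(size=n_out)               # BN bias
mu = rng.normal(size=n_out)                 # BN running mean
sigma = rng.uniform(0.5, 1.5, size=n_out)   # BN running std
eta_l, eta_next = 0.4, 0.3                  # fix scaling factors of layers l, l+1

q = rng.uniform(0.0, 1.0, size=n_in)        # fixed-point input activation

# Effective weight and bias of Eq. (6)
W_eff = (gamma / sigma)[:, None] * (eta_l / eta_next) * W
b_eff = (beta - gamma / sigma * mu) / eta_next
fused = W_eff @ q + b_eff

# Reference path: rescale input to its real value, apply conv then BN,
# then divide by the next layer's fix scaling factor before quantization.
conv = W @ (eta_l * q)
bn = gamma * (conv - mu) / sigma + beta
ref = bn / eta_next
assert np.allclose(fused, ref)
```

The assertion confirms that folding $\gamma/\sigma$ and the ratio $\eta_{\mathrm{fix}}^{(l)}/\eta_{\mathrm{fix}}^{(l+1)}$ into the weight reproduces the full Conv-BN-rescale pipeline, so no standalone full-precision rescaling is needed at inference.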
Also, we set $\mathrm{WL} = 8$ for all layers. As can be seen from (6), to obtain the final effective weight for fixed-point quantization of the $l$-th Conv-BN block, we need to access the fix scaling factor, or equivalently the clipping-level $\alpha$ and the activation fractional length FL, of its following $(l + 1)$-th block(s). To achieve this, we apply two techniques.

Pre-estimating Fractional Length. As mentioned above, we determine the activation fractional length from its standard deviation. Also, (5) indicates that the fix scaling factor relies on this fractional length for each layer. However, in (6), we need the fix scaling factor of the next layer to determine the effective weight under quantization, and it has not yet been updated. Thus, when calculating the effective weights during training, we use the activation fractional length stored in the buffer instead of the one used for quantizing the input of the next layer.

Clipping-Level Sharing. As shown in Fig. 5, for residual blocks, some layers have two following layers (which we call child layers). Since we need the fix scaling factor of a child layer to calculate the effective weight of its parent (see (6)), inconsistent fix scaling factors among the child layers would be a problem. To this end, we define one layer as the master and force all its siblings to share its clipping-level. Ideally, siblings would share both the clipping-level and the fractional length, but we find that sharing the fractional length leads to a considerable performance drop, especially for deep models such as MobileNet V2 and ResNet50. This is because the fractional lengths play two roles here: one determines the fix scaling factor, and the other determines the representable range (or, equivalently, the clipping-level). Using different fractional lengths effectively enables different clipping-levels (although they only differ by a power-of-2 factor, see Appx.
7.6), which can be beneficial because the activation scales might vary from layer to layer. Moreover, breaking the constraint of sharing the activation fractional length does not introduce much computational cost, as the values only differ in storage format; they are typically stored in 32-bit, i.e., the accumulation results are only quantized to 8-bit for multiplication. Note that when computing the effective weight of the parent layer, we only use the master child's activation fractional length. For the effective weight of each child layer and the fixed-point quantization of its input, we use its own fractional length.

![](images/7151153050a69786b89b954384249c0262944ffcbdb79af4867dd8a8cb17f1eb.jpg)
(a) ResBlock with direct connection.

![](images/e95ac6375975e64ae35f4665bb0a4642ba5d639f40fcd6d5d874d60df002aaab.jpg)
(b) ResBlock with downsampling.
Figure 5: Illustration of residual connections. For a layer with several layers (named child layers) directly following it, we choose one to be the master, and all its sibling layers use the master layer's clipping-level. On the other hand, since using different fractional lengths only causes bit shifting or different fixed-point quantization formats, and the values are stored in 32-bit before being quantized to 8-bit, we do not share the fractional formats, allowing more degrees of freedom. The two figures show the case of a direct residual connection (a) and that with a downsampling convolution layer (b).

# 5 EXPERIMENTS

In this section, we present our results for various models on ImageNet (Deng et al., 2009) for the classification task and compare them with previous works that focus on quantization-aware training to verify the effectiveness of our method. We show results for two training settings. First, we discuss the conventional training method following Jin et al. (2020b).
Second, we combine our method with a recent fine-tuning method that quantizes full-precision models with high accuracy (Yao et al., 2021). More detailed experimental settings are described in Appx. 7.1.

Conventional training. We first apply our method using conventional training (Choi et al., 2018; Esser et al., 2019; Jin et al., 2020b; Fu et al., 2021), where the quantized model is trained with the same simple settings as the full-precision model (more details in Appx. 7.1). To verify the effectiveness of our method, we perform experiments on several models, including ResNet18 and MobileNet V1/V2. As shown in Table 1, our method achieves state-of-the-art results for all models. Additionally, we obtain comparable or even better performance than the full-precision counterparts.

Compared with previous works on simulated quantization (Choi et al., 2018; Park et al., 2018; Esser et al., 2019; Jin et al., 2020b; Fu et al., 2021), which require full-precision rescaling after the INT8 convolution, our approach is not only more efficient but also achieves better performance. On the other hand, compared with previous fixed-point quantization (Jain et al., 2019), our approach gives better results. This might be partly because our method is based on a more systematic analysis, as explained above in Sec. 3.3.

Table 1: 8-bit quantization with conventional training for ResNet18 and MobileNet V1/V2b. Following Yao et al. (2021), we abbreviate Integer-Only Quantization as "Int", INT8-Multiplication-Only Quantization as "8-bit", the Baseline Accuracy as "BL", and Top-1 Accuracy as "Top-1". All models use 8-bit weight and activation quantization. For MobileNet V2, we use the MobileNet V2b version, as it is the most typical one.

(a) ResNet18
| Method | Int | 8-bit | BL | Top-1 |
| --- | --- | --- | --- | --- |
| Baseline (FP) | | | 70.3 | 70.3 |
| RVQuant (Park et al., 2018) | | | 69.9 | 70.0 |
| PACT (Choi et al., 2018) | | | 70.2 | 69.8 |
| LSQ (Esser et al., 2019) | | | 70.5 | 71.1 |
| CPT (Fu et al., 2021) | | | - | 69.6 |
| F8Net (ours) | | | 70.3 | 71.1 |

(b) MobileNet V1
| Method | Int | 8-bit | BL | Top-1 |
| --- | --- | --- | --- | --- |
| Baseline (FP) | | | 72.4 | 72.4 |
| PACT (Choi et al., 2018) | | | 72.1 | 71.3 |
| TQT (Jain et al., 2019) | | | 71.1 | 71.1 |
| SAT (Jin et al., 2020b) | | | 71.7 | 72.6 |
| F8Net (ours) | | | 72.4 | 72.8 |

(c) MobileNet V2b
| Method | Int | 8-bit | BL | Top-1 |
| --- | --- | --- | --- | --- |
| Baseline (FP) | | | 72.7 | 72.7 |
| PACT (Choi et al., 2018) | | | 72.1 | 71.7 |
| TQT (Jain et al., 2019) | | | 71.7 | 71.8 |
| SAT (Jin et al., 2020b) | | | 71.8 | 72.5 |
| F8Net (ours) | | | 72.7 | 72.6 |
To further understand the significance of our method, we plot the fractional lengths for weight and activation of each layer. As illustrated in Fig. 2b for MobileNet V2, we find that the fractional lengths for both weight and activation vary from layer to layer. Specifically, for weight quantization, since some layers, especially some depthwise layers, have a relatively large range of effective weights, a small fractional length is necessary to avoid overflow. On the other hand, for layers with a small weight scale, a large fractional length is more advantageous for overcoming underflow. The same conclusion also applies to the activation fractional length. Indeed, for some early layers in front of depthwise convolution layers, the activation fractional length needs to be small, yet for later stages a larger fractional length is desired. This further verifies our finding that using different fractional lengths for layers with the same parent is critical for good performance, because layers at different depths might be siblings and require different fractional lengths (see Fig. 5).

Tiny fine-tuning on full-precision models. Recent work (Yao et al., 2021) focuses on investigating the potential of neural network quantization. To this end, they suggest briefly fine-tuning a well-pretrained full-precision model with high accuracy. This helps avoid misleading conclusions arising from improper comparisons between weak full-precision models and strong quantized models. To further investigate the power of our method and compare it with these advanced techniques, we also apply our method to fine-tune several full-precision models with high accuracy. Also, given that the total number of fine-tuning steps is very small, we apply grid search to determine the optimal fractional lengths for this experiment.
The results are listed in Table 2, where we find that our method achieves better performance than the previous method (Yao et al., 2021), without time- and energy-consuming high-precision multiplication (namely the dyadic scaling shown in Fig. 1c).

Our method reveals that high-precision rescaling, whether implemented in full precision or approximated with INT32 multiplication followed by bit-shifting (a.k.a. dyadic multiplication), is indeed un

Table 2: 8-bit quantization with tiny fine-tuning on well-trained full-precision models. Following Yao et al. (2021), we abbreviate Integer-Only Quantization as "Int", INT8-Multiplication-Only Quantization as "8-bit", Layer-Wise Quantization as "Layer", the Baseline Accuracy as "BL", Top-1 Accuracy as "Top-1", and the Top-1 Accuracy Drop with respect to the baseline as "Drop". We use two baselines for ResNet50, one from PytorchCV (Sémery, 2021) (Baseline #1) and another from Nvidia (Nvidia, 2021) (Baseline #2), and we use the ResNet50b version. Note that OMPQ (Ma et al., 2021b) uses mixed-precision quantization.

(a) ResNet18
| Method | Int | 8-bit | Layer | BL | Top-1 | Drop |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline (FP) | ✗ | ✗ | - | 71.5 | 71.5 | - |
| HAWQ-V3 (Yao et al., 2021) | ✓ | ✗ | ✗ | 71.5 | 71.6 | 0.1 |
| HAWQ-V3 (Yao et al., 2021) | ✓ | ✗ | ✓ | 71.5 | 70.9 | -0.6 |
| OMPQ (Ma et al., 2021b) | ✗ | ✗ | ✓ | 73.1 | 72.3 | -0.8 |
| F8Net (ours) | ✓ | ✓ | ✓ | 73.1 | 72.4 | -0.7 |

(b) ResNet50b
| Method | Int | 8-bit | Layer | BL | Top-1 | Drop |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline #1 (FP) | ✗ | ✗ | - | 77.6 | 77.6 | - |
| HAWQ-V3 (Yao et al., 2021) | ✓ | ✗ | ✗ | 77.6 | 77.5 | -0.1 |
| HAWQ-V3 (Yao et al., 2021) | ✓ | ✗ | ✓ | 77.6 | 77.1 | -0.5 |
| F8Net (ours) | ✓ | ✓ | ✓ | 77.6 | 77.6 | 0.0 |
| Baseline #2 (FP) | ✗ | ✗ | - | 78.5 | 78.5 | - |
| HAWQ-V3 (Yao et al., 2021) | ✓ | ✗ | ✗ | 78.5 | 78.1 | -0.4 |
| HAWQ-V3 (Yao et al., 2021) | ✓ | ✗ | ✓ | 78.5 | 76.7 | -1.8 |
| F8Net (ours) | ✓ | ✓ | ✓ | 78.5 | 78.1 | -0.4 |
necessary and is not the key to good performance for quantized models. This was not well understood in the previous literature. Specifically, we demonstrate that by properly choosing the formats for weight and activation in each layer, we are able to achieve comparable and even better performance with 8-bit fixed-point numbers, which can be implemented more efficiently on specific hardware, such as DSPs that only support integer operations.

# 6 CONCLUSION

Previous works on neural network quantization typically rely on 32-bit multiplication, either in full precision or with INT32 multiplication followed by bit-shifting (termed dyadic multiplication). This raises the question of whether high-precision multiplication is critical to guarantee high performance for quantized models, or whether it can be eliminated to save cost. In this work, we study the opportunities and challenges of quantizing neural networks with 8-bit fixed-point multiplication only, via thorough statistical analysis and novel algorithm design. We validate our method on ResNet18/50 and MobileNet V1/V2 on ImageNet classification. With our method, we achieve state-of-the-art performance without 32-bit multiplication, and the quantized models achieve comparable or even better performance than their full-precision counterparts. Our method demonstrates that high-precision multiplication, implemented with either floating-point or dyadic scaling, is not necessary for model quantization to achieve good performance. One future direction is to perform an in-depth statistical analysis of fixed-point numbers with smaller word lengths for neural network quantization.

# REFERENCES

Ron Banner, Yury Nahshan, Elad Hoffer, and Daniel Soudry. Post-training 4-bit quantization of convolution networks for rapid-deployment. arXiv preprint arXiv:1810.05723, 2018.
Yash Bhalgat, Jinwon Lee, Markus Nagel, Tijmen Blankevoort, and Nojun Kwak.
Lsq+: Improving low-bit quantization through learnable offsets and better initialization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 696-697, 2020. +Adrian Bulat, Brais Martinez, and Georgios Tzimiropoulos. High-capacity expert binary networks. arXiv preprint arXiv:2010.03558, 2020. +Zhaowei Cai, Xiaodong He, Jian Sun, and Nuno Vasconcelos. Deep learning with low precision by half-wave gaussian quantization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5918-5926, 2017. +Jianfei Chen, Yu Gai, Zhewei Yao, Michael W Mahoney, and Joseph E Gonzalez. A statistical framework for low-bitwidth training of deep neural networks. arXiv preprint arXiv:2010.14298, 2020. +Xi Chen, Xiaolin Hu, Hucheng Zhou, and Ningyi Xu. Fxpnet: Training a deep convolutional neural network in fixed-point representation. In 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2494-2501. IEEE, 2017. +Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085, 2018. +Yoni Choukroun, Eli Kravchik, Fan Yang, and Pavel Kisilev. Low-bit quantization of neural networks for efficient inference. In ICCV Workshops, pp. 3009-3018, 2019. +Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in neural information processing systems, pp. 3123-3131, 2015. +Bita Darvish Rouhani, Daniel Lo, Ritchie Zhao, Ming Liu, Jeremy Fowers, Kalin Ovtcharov, Anna Vinogradsky, Sarah Massengill, Lita Yang, Ray Bittner, et al. Pushing the limits of narrow precision inferencing at cloud scale with microsoft floating point. Advances in Neural Information Processing Systems, 33, 2020. 
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009. +Zhen Dong, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. Hawq: Hessian aware quantization of neural networks with mixed-precision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 293–302, 2019. +Lukas Enderich, Fabian Timm, Lars Rosenbaum, and Wolfram Burgard. Fix-net: pure fixed-point representation of deep neural networks. 2019a. +Lukas Enderich, Fabian Timm, Lars Rosenbaum, and Wolfram Burgard. Learning multimodal fixed-point weights using gradient descent. arXiv preprint arXiv:1907.07220, 2019b. +Steven K Esser, Jeffrey L McKinstry, Deepika Bablani, Rathinakumar Appuswamy, and Dharmendra S Modha. Learned step size quantization. arXiv preprint arXiv:1902.08153, 2019. +Jun Fang, Ali Shafiee, Hamzah Abdel-Aziz, David Thorsley, Georgios Georgiadis, and Joseph Hassoun. Near-lossless post-training quantization of deep neural networks via a piecewise linear approximation. arXiv preprint arXiv:2002.00104, pp. 4, 2020a. +Jun Fang, Ali Shafiee, Hamzah Abdel-Aziz, David Thorsley, Georgios Georgiadis, and Joseph H Hassoun. Post-training piecewise linear quantization for deep neural networks. In European Conference on Computer Vision, pp. 69-86. Springer, 2020b. + +Yonggan Fu, Haoran You, Yang Zhao, Yue Wang, Chaojian Li, Kailash Gopalakrishnan, Zhangyang Wang, and Yingyan Lin. Fractrain: Fractionally squeezing bit savings both temporally and spatially for efficient dnn training. arXiv preprint arXiv:2012.13113, 2020. +Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yining Ding, Vikas Chandra, and Yingyan Lin. Cpt: Efficient deep neural network training via cyclic precision. arXiv preprint arXiv:2101.09868, 2021. +Sahaj Garg, Joe Lou, Anirudh Jain, and Mitchell Nahmias. 
Dynamic precision analog computing for neural networks. arXiv preprint arXiv:2102.06365, 2021.
Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W Mahoney, and Kurt Keutzer. A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630, 2021.
Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, and Junjie Yan. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4852-4861, 2019.
Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
Rishabh Goyal, Joaquin Vanschoren, Victor Van Acht, and Stephan Nijssen. Fixed-point quantization of convolutional neural networks for quantized inference on embedded platforms. arXiv preprint arXiv:2102.02147, 2021.
Nianhui Guo, Joseph Bethge, Haojin Yang, Kai Zhong, Xuefei Ning, Christoph Meinel, and Yu Wang. Boolnet: Minimizing the energy consumption of binary neural networks. arXiv preprint arXiv:2106.06991, 2021.
Philipp Gysel, Jon Pimentel, Mohammad Motamedi, and Soheil Ghiasi. Ristretto: A framework for empirical study of resource-efficient inference in convolutional neural networks. IEEE transactions on neural networks and learning systems, 29(11):5784-5789, 2018.
Hai Victor Habi, Roy H Jennings, and Arnon Netzer. Hmq: Hardware friendly mixed precision quantization block for cnns. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXVI 16, pp. 448-463. Springer, 2020.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
Xiangyu He and Jian Cheng.
Learning compression from limited unlabeled data. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 752-769, 2018.
Joshua Ho. Qualcomm details hexagon 680 dsp in snapdragon 820 accelerated imaging, 2015. URL https://www.anandtech.com/show/9552/qualcomm-details-hexagon-680-dsp-in-snapdragon-820-accelerated-imaging.
Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. Advances in neural information processing systems, 29, 2016.
Itay Hubara, Yury Nahshan, Yair Hanani, Ron Banner, and Daniel Soudry. Improving post training neural quantization: Layer-wise calibration and integer programming. arXiv preprint arXiv:2006.10518, 2020.
Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2704-2713, 2018.
Sambhav R Jain, Albert Gural, Michael Wu, and Chris H Dick. Trained quantization thresholds for accurate and efficient fixed-point inference of deep neural networks. arXiv preprint arXiv:1903.08066, 2019.
Qing Jin, Linjie Yang, and Zhenyu Liao. Adabits: Neural network quantization with adaptive bitwidths. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2146-2156, 2020a.
Qing Jin, Linjie Yang, Zhenyu Liao, and Xiaoning Qian. Neural network quantization with scale-adjusted training. In BMVC, 2020b.
Qing Jin, Jian Ren, Oliver J Woodford, Jiazhuo Wang, Geng Yuan, Yanzhi Wang, and Sergey Tulyakov. Teachers do more than teach: Compressing image-to-image models.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13600-13611, 2021. +Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W Mahoney, and Kurt Keutzer. I-bert: Integer-only bert quantization. arXiv preprint arXiv:2101.01321, 2021. +Sungrae Kim and Hyun Kim. Zero-centered fixed-point quantization with iterative retraining for deep convolutional neural network-based object detectors. IEEE Access, 9:20828-20839, 2021. +Raghuraman Krishnamoorthi. Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv:1806.08342, 2018. +Hamed F Langroudi, Zachariah Carmichael, David Pastuch, and Dhireesha Kudithipudi. Cheetah: Mixed low-precision hardware & software co-design framework for dnns on the edge. arXiv preprint arXiv:1908.02386, 2019. +Zhengang Li, Geng Yuan, Wei Niu, Pu Zhao, Yanyu Li, Yuxuan Cai, Xuan Shen, Zheng Zhan, Zhenglun Kong, Qing Jin, et al. Npas: A compiler-aware framework of unified network pruning and architecture search for beyond real-time mobile acceleration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14255-14266, 2021. +Tailin Liang, John Glossner, Lei Wang, Shaobo Shi, and Xiaotong Zhang. Pruning and quantization for deep neural network acceleration: A survey. Neurocomputing, 461:370-403, 2021. +Darryl Lin, Sachin Talathi, and Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. In International conference on machine learning, pp. 2849-2858. PMLR, 2016. +Ning Liu, Geng Yuan, Zhengping Che, Xuan Shen, Xiaolong Ma, Qing Jin, Jian Ren, Jian Tang, Sijia Liu, and Yanzhi Wang. Lottery ticket preserves weight correlation: Is it desirable or not? In International Conference on Machine Learning, pp. 7011-7020. PMLR, 2021. +Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. 
In Proceedings of the IEEE international conference on computer vision, pp. 2736-2744, 2017.
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. arXiv preprint arXiv:1810.05270, 2018.
Ping Luo, Xinjiang Wang, Wenqi Shao, and Zhanglin Peng. Towards understanding regularization in batch normalization. arXiv preprint arXiv:1809.00846, 2018.
Xiaolong Ma, Wei Niu, Tianyun Zhang, Sijia Liu, Sheng Lin, Hongjia Li, Wujie Wen, Xiang Chen, Jian Tang, Kaisheng Ma, et al. An image enhancing pattern-based sparsity for real-time inference on mobile devices. In European Conference on Computer Vision, pp. 629-645. Springer, 2020.
Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, Xiaohan Chen, Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, et al. Sanity checks for lottery tickets: Does your winning ticket really win the jackpot? Advances in Neural Information Processing Systems, 34, 2021a.
Yuexiao Ma, Taisong Jin, Xiawu Zheng, Yan Wang, Huixia Li, Guannan Jiang, Wei Zhang, and Rongrong Ji. Ompq: Orthogonal mixed precision quantization. arXiv preprint arXiv:2109.07865, 2021b.
Jieru Mei, Yingwei Li, Xiaochen Lian, Xiaojie Jin, Linjie Yang, Alan Yuille, and Jianchao Yang. Atomnas: Fine-grained end-to-end neural architecture search. arXiv preprint arXiv:1912.09640, 2019.
Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, and Debbie Marr. Wrpn: Wide reduced-precision networks. arXiv preprint arXiv:1709.01134, 2017.
Rahul Mishra, Hari Prabhat Gupta, and Tanima Dutta. A survey on deep neural network compression: Challenges, overview, and solutions. arXiv preprint arXiv:2010.03954, 2020.
Norbert Mitschke, Michael Heizmann, Klaus-Henning Noffz, and Ralf Wittmann. A fixed-point quantization technique for convolutional neural networks based on weight scaling. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 3836-3840. IEEE, 2019.
Markus Nagel, Mart van Baalen, Tijmen Blankevoort, and Max Welling. Data-free quantization through weight equalization and bias correction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1325-1334, 2019.
Markus Nagel, Marios Fournarakis, Rana Ali Amjad, Yelysei Bondarenko, Mart van Baalen, and Tijmen Blankevoort. A white paper on neural network quantization. arXiv preprint arXiv:2106.08295, 2021.
Nvidia. Nvidia models, 2021. URL https://ngc.nvidia.com/catalog/models/nvidia=resnet50_PYt_amp.
Sangyun Oh, Hyeonuk Sim, Sugil Lee, and Jongeun Lee. Automated log-scale quantization for low-cost deep neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 742-751, 2021.
Eunhyeok Park, Junwhan Ahn, and Sungjoo Yoo. Weighted-entropy-based quantization for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5456-5464, 2017.
Eunhyeok Park, Sungjoo Yoo, and Peter Vajda. Value-aware quantization for training and inference of neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 580-595, 2018.
QCOM. Qualcomm® hexagon™ dsp, 2019. URL https://developer.qualcomm.com/software/hexagon-dsp-sdk.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In European conference on computer vision, pp. 525-542. Springer, 2016.
Oleg Sémery. Pytorchcv library, 2021. URL https://pypi.org/project/pytorchcv/.
Steven W Smith et al. The scientist and engineer's guide to digital signal processing. 1997.
Shyam A Tailor, Javier Fernandez-Marques, and Nicholas D Lane. Degree-quant: Quantization-aware training for graph neural networks. arXiv preprint arXiv:2008.05000, 2020.
Lizhe Tan and Jean Jiang. Digital signal processing: fundamentals and applications. Academic Press, 2018.
+Hidenori Tanaka, Daniel Kunin, Daniel LK Yamins, and Surya Ganguli. Pruning neural networks without any data by iteratively conserving synaptic flow. arXiv preprint arXiv:2006.05467, 2020. +Kuan Wang, Zhijian Liu, Yujun Lin, Ji Lin, and Song Han. Haq: Hardware-aware automated quantization with mixed precision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8612-8620, 2019. +Peisong Wang, Qinghao Hu, Yifan Zhang, Chunjie Zhang, Yang Liu, and Jian Cheng. Two-step quantization for low-bit neural networks. In Proceedings of the IEEE Conference on computer vision and pattern recognition, pp. 4376-4384, 2018. + +Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev, and Paulius Micikevicius. Integer quantization for deep learning inference: Principles and empirical evaluation. arXiv preprint arXiv:2004.09602, 2020. +Linjie Yang and Qing Jin. Fracbits: Mixed precision quantization via fractional bit-widths. arXiv preprint arXiv:2007.02017, 1, 2020. +Zhaohui Yang, Yunhe Wang, Kai Han, Chunjing Xu, Chao Xu, Dacheng Tao, and Chang Xu. Searching for low-bit weights in quantized neural networks. arXiv preprint arXiv:2009.08695, 2020. +Zhewei Yao, Zhen Dong, Zhangcheng Zheng, Amir Gholami, Jiali Yu, Eric Tan, Leyuan Wang, Qijing Huang, Yida Wang, Michael Mahoney, et al. Hawq-v3: Dyadic neural network quantization. In International Conference on Machine Learning, 2021. +Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, Ning Liu, Yifan Gong, Zheng Zhan, Chaoyang He, Qing Jin, et al. Mest: Accurate and fast memory-economic sparse training framework on the edge. Advances in Neural Information Processing Systems, 34, 2021. +Xishan Zhang, Shaoli Liu, Rui Zhang, Chang Liu, Di Huang, Shiyi Zhou, Jiaming Guo, Qi Guo, Zidong Du, Tian Zhi, et al. Fixed-point back-propagation training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2330-2338, 2020. 
Kang Zhao, Sida Huang, Pan Pan, Yinghan Li, Yingya Zhang, Zhenyu Gu, and Yinghui Xu. Distribution adaptive int8 quantization for training cnns. In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021a.
Sijie Zhao, Tao Yue, and Xuemei Hu. Distribution-aware adaptive multi-bit quantization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9281-9290, 2021b.
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quantization: Towards lossless cnns with low-precision weights. arXiv preprint arXiv:1702.03044, 2017.
Aojun Zhou, Anbang Yao, Kuan Wang, and Yurong Chen. Explicit loss-error-aware quantization for low-bit deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 9426-9435, 2018.
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. arXiv preprint arXiv:1612.01064, 2016.
Feng Zhu, Ruihao Gong, Fengwei Yu, Xianglong Liu, Yanfei Wang, Zhelong Li, Xiuqi Yang, and Junjie Yan. Towards unified int8 training for convolutional neural network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1969-1979, 2020.

# 7 APPENDIX

# 7.1 MORE EXPERIMENTAL DETAILS

More Details for Conventional Training. For the conventional training method, we train the quantized model initialized from a pre-trained full-precision one. The training of the full-precision and quantized models shares the same hyperparameters, including the learning rate and its scheduler, weight decay, number of epochs, optimizer, and batch size. For ResNet18 and MobileNet V1, we use an initial learning rate of 0.05, and for MobileNet V2, it is 0.1.
We find that the exact value of the learning rate, i.e., 0.1 vs. 0.05, does not have much impact on the final performance. In total, 150 epochs of training are conducted, with a cosine learning rate scheduler without restarts. A warmup strategy is adopted, linearly increasing the learning rate to batchsize/256 × 0.05 (Goyal et al., 2017) during the first five epochs before the cosine schedule begins. The input image is randomly cropped to $224 \times 224$ and randomly flipped horizontally, and is kept as 8-bit unsigned fixed-point numbers with $\mathrm{FL} = 8$ and without standardization. For ResNet18 and MobileNet V1/V2, we use a batch size of 2048 and run the experiments on 8 A100 GPUs. The parameters are updated with the SGD optimizer using Nesterov momentum with a momentum weight of 0.9 and no damping. The original MobileNet V2 architecture uses ReLU6 as its activation. Since our unified PACT and fixed-point quantization already contains a clipping operation and can equivalently express ReLU6 by rescaling weights or activations, we eliminate ReLU6 in our implementation.

Discussion for Weight Decay. We set weight decay to $4 \times 10^{-5}$ and find that the weight decay scheme is critical for good performance, especially for the quantized model. We analyze weight decay for the different models as follows:

- For ResNet18, we apply weight decay on all layers, including convolution, fully-connected, and BN layers.
- For MobileNet V1, previous methods only apply weight decay on conventional convolution and fully-connected layers, but not on depthwise convolution and BN layers (Howard et al., 2017). We find this leads to overfitting, making some early convolution layers have large weights, which is not quantization-friendly.
We further observe that some channels of certain depthwise convolution layers receive all-zero inputs, because all outputs of the corresponding channels of the previous layer become negative and ReLU is applied afterwards, making the running statistics of the corresponding channels in the following BN layer almost zero. This breaks the regularization effect of BN (Luo et al., 2018). Since each output channel of a depthwise convolution layer depends on only one input channel, the weights connecting them become uncontrolled, and the effective weights become large, leading to overfitting. Applying weight decay to the depthwise convolution and BN layers helps to alleviate this problem, and the resulting effective weights become small. +- For MobileNet V2, we find that some overfitting actually reduces the validation error (even though the training error is lower), and applying weight decay to depthwise convolution or BN weights impairs the training procedure. The underlying reason might be related to the residual structure of this model (note that MobileNet V1 does not use residual connections). + +In summary, we apply weight decay to all layers, including depthwise convolution and BN layers, for ResNet18 and MobileNet V1, and do not apply weight decay to depthwise convolution and BN layers for MobileNet V2. + +More Details for Tiny Fine-tuning. For tiny fine-tuning of full-precision models, we follow the strategy proposed in Yao et al. (2021). Specifically, we use a constant learning rate of $10^{-4}$ with 500 iterations of fine-tuning (equivalently, a data ratio of around 0.05 with a batch size of 128). Different from Yao et al. (2021), we find that fixing BN is not helpful, and we allow it to update during the whole fine-tuning step. As mentioned in Sec. 5, we apply grid search to determine the fractional lengths for both weights and inputs, as the training cost is very small and the grid search does not introduce much extra effort or training time.
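The grid search over fractional lengths can be sketched as follows. This is our own illustrative reconstruction, not the authors' code: `fix_quant_signed` implements the signed fixed-point quantizer of Eq. (9), and `best_fractional_length` scores each candidate FL by the quantization MSE of a weight tensor (the criterion actually used during fine-tuning may differ):

```python
import numpy as np

def fix_quant_signed(x, WL=8, FL=4):
    """Signed fixed-point quantization, cf. Eq. (9)."""
    lo, hi = -2**(WL - 1) + 1, 2**(WL - 1) - 1
    return np.round(np.clip(x * 2**FL, lo, hi)) / 2**FL

def best_fractional_length(w, WL=8):
    """Grid-search the FL in 0..WL-1 minimizing the quantization MSE."""
    errors = [np.mean((w - fix_quant_signed(w, WL, FL)) ** 2)
              for FL in range(WL)]
    return int(np.argmin(errors))

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=1024)   # small weights favor a large FL
print(best_fractional_length(w))       # selects FL = 7: finest step, no clipping
```

The same exhaustive loop can be run per layer for both weights and inputs; with only WL candidates per tensor, the cost is negligible compared to fine-tuning itself.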
Also, since the original full-precision model expects normalized input, we apply normalization to the images and quantize them as signed fixed-point numbers (with the format determined by grid search) before feeding them into the first convolution layer of the model. + +# 7.2 MORE DETAILS FOR STATISTICAL ANALYSIS + +For the toy example in Fig. 3, we sample 10,000 zero-mean Gaussian random variables with different standard deviations, and apply a ReLU activation to obtain rectified Gaussian variables for the unsigned quantization case. The variables are then quantized with the fixed-point quantization given in (2) and (9), respectively. We calculate the relative quantization error and plot it against the standard deviation for each fixed-point format. Note that zero mean is a reasonable simplifying assumption if we neglect the impact of the bias in BN for analysis purposes. + +# 7.3 DERIVATION FOR FIXED-POINT AND PACT RELATION + +Here we derive the relationship between PACT and fixed-point quantization shown in (4). Specifically, the PACT quantization in (3) can be formulated as follows for positive $\alpha$ : + +$$
\begin{array}{rl} \operatorname{PACT}(x) &= \frac{\alpha}{M} \operatorname{round}\left(\frac{M}{\alpha} \operatorname{clip}(x, 0, \alpha)\right) \quad (7\text{a}) \\ &= \frac{\alpha}{M} \operatorname{round}\left(M \operatorname{clip}\left(\frac{x}{\alpha}, 0, 1\right)\right) \quad (7\text{b}) \\ &= \frac{\alpha}{M} \operatorname{round}\left(\frac{M}{2^{\mathrm{WL}}-1} \operatorname{clip}\left(\frac{2^{\mathrm{WL}}-1}{\alpha} x, 0, 2^{\mathrm{WL}}-1\right)\right) \quad (7\text{c}) \\ &= \frac{2^{\mathrm{WL}}-1}{M} \cdot \frac{2^{\mathrm{FL}}\alpha}{2^{\mathrm{WL}}-1} \cdot \frac{1}{2^{\mathrm{FL}}} \operatorname{round}\left(\frac{M}{2^{\mathrm{WL}}-1} \operatorname{clip}\left(\frac{2^{\mathrm{WL}}-1}{2^{\mathrm{FL}}\alpha} x \cdot 2^{\mathrm{FL}}, 0, 2^{\mathrm{WL}}-1\right)\right). \quad (7\text{d}) \end{array}
$$ + +For $M = 2^{\mathrm{WL}} - 1$ , which is the typical setting for quantization, we have: + +$$
\operatorname{PACT}(x) = \frac{2^{\mathrm{FL}}\alpha}{2^{\mathrm{WL}}-1} \cdot \frac{1}{2^{\mathrm{FL}}} \operatorname{round}\left(\operatorname{clip}\left(\frac{2^{\mathrm{WL}}-1}{2^{\mathrm{FL}}\alpha} x \cdot 2^{\mathrm{FL}}, 0, 2^{\mathrm{WL}}-1\right)\right). \tag{8}
$$ + +Comparing with the expression for fixed-point quantization (2), we immediately obtain (4). + +# 7.4 DOUBLE-SIDED QUANTIZATION FOR WEIGHTS AND MOBILENET V2 + +In (2), we only give the formula for the unsigned case of fixed-point quantization. For weights, and for activations of layers that are not followed by a ReLU nonlinearity (such as some layers in MobileNet V2), signed quantization is necessary, and the expression is similarly given as: + +$$
\operatorname{fix\_quant}(x) = \frac{1}{2^{\mathrm{FL}}} \operatorname{round}\left(\operatorname{clip}\left(x \cdot 2^{\mathrm{FL}}, -2^{\mathrm{WL}-1}+1, 2^{\mathrm{WL}-1}-1\right)\right), \tag{9}
$$ + +where clip is the clipping function, and $0 \leq \mathrm{FL} \leq \mathrm{WL} - 1$ . + +# 7.5 DERIVATION OF EFFECTIVE WEIGHT + +Here we derive the equation for the effective weights relating two adjacent layers in Sec. 4.3.
Specifically, for a Conv-BN-ReLU block with conventional PACT quantization using input clipping, quantization, and dequantization, the general procedure can be described as + +$$
\text{Nonlinear:} \quad \widetilde{x}_{i}^{(l)} = \operatorname{clip}\left(x_{i}^{(l-1)}, 0, \alpha^{(l)}\right), \tag{10a}
$$ + +$$
\text{Input Quant (uint8):} \quad \widehat{q}_{i}^{(l)} = \operatorname{round}\left(\frac{M}{\alpha^{(l)}} \widetilde{x}_{i}^{(l)}\right), \tag{10b}
$$ + +$$
\text{Input Dequant:} \quad \widetilde{q}_{i}^{(l)} = \frac{\alpha^{(l)}}{M} \widehat{q}_{i}^{(l)}, \tag{10c}
$$ + +$$
\text{Conv:} \quad y_{i}^{(l)} = \sum_{j=1}^{n^{(l)}} W_{ij}^{(l)} \widetilde{q}_{j}^{(l)}, \tag{10d}
$$ + +$$
\mathrm{BN}: \quad x_{i}^{(l)} = \gamma_{i}^{(l)} \frac{y_{i}^{(l)} - \mu_{i}^{(l)}}{\sigma_{i}^{(l)}} + \beta_{i}^{(l)} \tag{10e}
$$ + +$$
= \frac{\gamma_{i}^{(l)}}{\sigma_{i}^{(l)}} y_{i}^{(l)} + \left(\beta_{i}^{(l)} - \frac{\gamma_{i}^{(l)}}{\sigma_{i}^{(l)}} \mu_{i}^{(l)}\right), \tag{10f}
$$ + +where $x$ is the input before clipping, $\widehat{q}$ is the integer input after quantization, $\widetilde{q}$ is the full-precision input after dequantization, clip is the clipping function, $\alpha$ is the clipping level, $M = 2^{\mathrm{WL}} - 1$ is the scaling for quantization, $W_{ij}^{(l)}$ is the weight of the convolution layer, $\gamma, \beta, \sigma, \mu$ are the weight, bias, running standard deviation, and running mean of the BN layer, respectively, and $i$ and $j$ are channel indices.
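Before moving on, the relations derived so far can be sanity-checked numerically. The sketch below is our own verification code, not from the paper: it confirms the PACT/fixed-point relation (8), with $\eta = 2^{\mathrm{FL}}\alpha/(2^{\mathrm{WL}}-1)$ as in (5), and the Conv-BN folding implied by (10d)-(10f), using a 1×1 convolution written as a matrix product:

```python
import numpy as np

rng = np.random.default_rng(0)
WL, FL = 8, 5                        # word length, arbitrary fractional length
M = 2**WL - 1                        # typical setting M = 2^WL - 1

def fix_quant_unsigned(x):
    """Unsigned fixed-point quantization, cf. Eq. (2)."""
    return np.round(np.clip(x * 2**FL, 0, 2**WL - 1)) / 2**FL

def pact(x, alpha):
    """PACT quantization, cf. Eq. (7a)."""
    return alpha / M * np.round(M / alpha * np.clip(x, 0, alpha))

# Check PACT(x) == eta * fix_quant(x / eta), cf. Eqs. (4) and (8).
alpha = 1.37
eta = 2**FL * alpha / (2**WL - 1)    # dequantization scale, cf. Eq. (5)
x = rng.uniform(-1.0, 3.0, size=10_000)
assert np.allclose(pact(x, alpha), eta * fix_quant_unsigned(x / eta))

# Check the Conv-BN folding of (10d)-(10f): gamma/sigma scales the weights
# and beta - gamma*mu/sigma becomes the bias.
n_in, n_out = 4, 3
W = rng.normal(size=(n_out, n_in))
gamma, beta = rng.normal(size=n_out), rng.normal(size=n_out)
mu, sigma = rng.normal(size=n_out), rng.uniform(0.5, 2.0, size=n_out)
q = rng.normal(size=n_in)            # dequantized input

y = W @ q                                       # Conv, (10d)
x_seq = gamma * (y - mu) / sigma + beta         # BN, (10e)
x_fold = ((gamma / sigma)[:, None] * W) @ q + (beta - gamma / sigma * mu)
assert np.allclose(x_seq, x_fold)
print("PACT relation and Conv-BN folding verified")
```

The folded form is exactly the affine map that appears inside fix_quant in (12b) below, before the dequantization scales are absorbed into the effective weight.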
We first note that (10a), (10b) and (10c) can be combined as: + +$$
\begin{array}{rl} \widetilde{q}_{i}^{(l)} &= \operatorname{PACT}\left(x_{i}^{(l-1)}\right) \quad (11\text{a}) \\ &= \eta_{\text{fix}}^{(l)} \operatorname{fix\_quant}\left(\frac{1}{\eta_{\text{fix}}^{(l)}} x_{i}^{(l-1)}\right) \quad (11\text{b}) \\ &= \eta_{\text{fix}}^{(l)} q_{i}^{(l)}, \quad (11\text{c}) \end{array}
$$ + +where $q$ is the fixed-point activation and we have used the relationship given by (4) and the definition in (5). From this we can derive: + +$$
\begin{array}{rl} q_{i}^{(l+1)} &= \operatorname{fix\_quant}\left(\frac{1}{\eta_{\text{fix}}^{(l+1)}} x_{i}^{(l)}\right) \quad (12\text{a}) \\ &= \operatorname{fix\_quant}\left(\frac{1}{\eta_{\text{fix}}^{(l+1)}} \left(\frac{\gamma_{i}^{(l)}}{\sigma_{i}^{(l)}} y_{i}^{(l)} + \left(\beta_{i}^{(l)} - \frac{\gamma_{i}^{(l)}}{\sigma_{i}^{(l)}} \mu_{i}^{(l)}\right)\right)\right) \quad (12\text{b}) \\ &= \operatorname{fix\_quant}\left(\frac{1}{\eta_{\text{fix}}^{(l+1)}} \left(\frac{\gamma_{i}^{(l)}}{\sigma_{i}^{(l)}} \sum_{j=1}^{n^{(l)}} W_{ij}^{(l)} \widetilde{q}_{j}^{(l)} + \left(\beta_{i}^{(l)} - \frac{\gamma_{i}^{(l)}}{\sigma_{i}^{(l)}} \mu_{i}^{(l)}\right)\right)\right) \quad (12\text{c}) \\ &= \operatorname{fix\_quant}\left(\frac{1}{\eta_{\text{fix}}^{(l+1)}} \left(\frac{\gamma_{i}^{(l)}}{\sigma_{i}^{(l)}} \sum_{j=1}^{n^{(l)}} W_{ij}^{(l)} \eta_{\text{fix}}^{(l)} q_{j}^{(l)} + \left(\beta_{i}^{(l)} - \frac{\gamma_{i}^{(l)}}{\sigma_{i}^{(l)}} \mu_{i}^{(l)}\right)\right)\right) \quad (12\text{d}) \\ &= \operatorname{fix\_quant}\left(\sum_{j=1}^{n^{(l)}} \frac{\gamma_{i}^{(l)}}{\sigma_{i}^{(l)}} \frac{\eta_{\text{fix}}^{(l)}}{\eta_{\text{fix}}^{(l+1)}} W_{ij}^{(l)} q_{j}^{(l)} + \frac{1}{\eta_{\text{fix}}^{(l+1)}} \left(\beta_{i}^{(l)} - \frac{\gamma_{i}^{(l)}}{\sigma_{i}^{(l)}} \mu_{i}^{(l)}\right)\right), \quad (12\text{e}) \end{array}
$$ + +which is just (6). + +# 7.6 PRIVATE FRACTIONAL LENGTHS ENABLING DIFFERENT CLIPPING-LEVELS + +Here we analyze the effect of using private fractional lengths between sibling layers and show that this effectively enables private clipping-levels for them. In fact, the original PACT quantization step is given as + +$$
\begin{array}{rl} \widetilde{q} &= \operatorname{PACT}(x) \quad (13\text{a}) \\ &= \frac{2^{\mathrm{FL}}\alpha}{2^{\mathrm{WL}}-1} \frac{1}{2^{\mathrm{FL}}} \operatorname{round}\left(\operatorname{clip}\left(\frac{2^{\mathrm{WL}}-1}{2^{\mathrm{FL}}\alpha} x \cdot 2^{\mathrm{FL}}, 0, 2^{\mathrm{WL}}-1\right)\right), \quad (13\text{b}) \end{array}
$$ + +where we have omitted layer and spatial indices for simplicity. Now, if we use private fractional lengths for the sibling layers while requiring them to share the same clipping level, and use the master child's fractional length, denoted $\mathrm{FL}^{\mathrm{m}}$ , for calculating the effective weight in (6), the above function becomes + +$$
\begin{array}{rl} \widetilde{q} &= \frac{2^{\mathrm{FL}}\alpha}{2^{\mathrm{WL}}-1} \frac{1}{2^{\mathrm{FL}}} \operatorname{round}\left(\operatorname{clip}\left(\frac{2^{\mathrm{WL}}-1}{2^{\mathrm{FL}^{\mathrm{m}}}\alpha} x \cdot 2^{\mathrm{FL}}, 0, 2^{\mathrm{WL}}-1\right)\right) \quad (14\text{a}) \\ &= 2^{\mathrm{FL}-\mathrm{FL}^{\mathrm{m}}} \cdot \frac{2^{\mathrm{FL}}\alpha'}{2^{\mathrm{WL}}-1} \frac{1}{2^{\mathrm{FL}}} \operatorname{round}\left(\operatorname{clip}\left(\frac{2^{\mathrm{WL}}-1}{2^{\mathrm{FL}}\alpha'} x \cdot 2^{\mathrm{FL}}, 0, 2^{\mathrm{WL}}-1\right)\right), \quad (14\text{b}) \end{array}
$$ + +where $\alpha' = 2^{\mathrm{FL}^{\mathrm{m}} - \mathrm{FL}}\alpha$ . From this we see that using private fractional lengths effectively enables different clipping-levels between sibling layers, at the cost of only some bit shifting. + +![](images/942f61f5e0c8c70f7aaea9795d94855be10c014f7cf23630a2806e3040d3c0a0.jpg) +Figure 6: Fractional lengths of each layer for a well-trained fixed-point ResNet50 model. + +Table 3: Analysis of the impact of the search space for the fractional length (ResNet50 on ImageNet).
+| Method | Frac. Len. Range | BL | Top-1 | +| --- | --- | --- | --- | +| Baseline (FP) | - | 77.6 | 77.6 | +| F8Net (ours) | 6, 7, 8 | 77.6 | 72.4 | +| F8Net (ours) | 0 - 8 | 77.6 | 77.6 | + +# 7.7 MORE DISCUSSION OF THE OPTIMAL FRACTIONAL LENGTH + +Here we give some further discussion of using the standard deviation to determine the optimal fractional length. The main reason is that the standard deviation is a more robust statistic than alternatives such as the dynamic range, and is an easily estimated parameter for Gaussian-distributed weights and pre-activations. For depthwise convolution layers, which contain far fewer weights and inputs, using robust statistics becomes essential, as these layers might include weights or inputs with unusual behavior, e.g., the pre-activation values of some channels may all become negative with large magnitude. Therefore, the standard deviation is more suitable and robust than the dynamic range. + +# 7.8 FRACTIONAL LENGTH FOR RESNET50 + +Here we provide more results on the distribution of fractional lengths in Fig. 6 for a well-trained ResNet50 with 8-bit fixed-point numbers, finetuned from Baseline #2 in Table 2b. As we can see, the optimal fractional lengths are layer-dependent, and their distribution differs markedly from that of MobileNet V2 (shown in Fig. 2b). Specifically, for MobileNet V2, some layers have vanishing weight fractional lengths and less than $4\%$ of all layers have an activation fractional length below 4, while for ResNet50, more than $88\%$ of all layers have an activation fractional length less than or equal to 4. + +# 7.9 ANALYZING THE SEARCH SPACE OF FRACTIONAL LENGTHS + +In the main paper, we adopt the largest possible search space for the fractional lengths of 8-bit fixed-point numbers. As shown in Fig. 2b, many layers have a fractional length less than 4, for either input or weight. Here we study whether it is possible to use only fractional lengths between 6 and 8. To this end, we finetune ResNet50 using Baseline #1.
The results are listed in Table 3, from which we find that restricting the fractional lengths to between 6 and 8 significantly impacts the performance of the final quantized model: the top-1 accuracy drops from $77.6\%$ to $72.4\%$ . \ No newline at end of file diff --git a/f8netfixedpoint8bitonlymultiplicationfornetworkquantization/images.zip b/f8netfixedpoint8bitonlymultiplicationfornetworkquantization/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..80230dc23cd6668cec0b3bd605046bfe50885b4e --- /dev/null +++ b/f8netfixedpoint8bitonlymultiplicationfornetworkquantization/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0e211f624f82730aa8f1a7ec76feef6cb63f8800b46fa33124c961bf8faacd67 +size 663225 diff --git a/f8netfixedpoint8bitonlymultiplicationfornetworkquantization/layout.json b/f8netfixedpoint8bitonlymultiplicationfornetworkquantization/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c8d518c6ddd0ffe6954ffe0c39f2bc1e90ed4f01 --- /dev/null +++ b/f8netfixedpoint8bitonlymultiplicationfornetworkquantization/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cd1c56646aea4b763c370fdc2d94e222c1dd02767bae3d207eaea1b5ee4e641a +size 512108 diff --git a/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/6b175ec8-ef7b-42c4-8b94-6849c1898c6c_content_list.json b/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/6b175ec8-ef7b-42c4-8b94-6849c1898c6c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ab37845b66f0a99cd7f157fb5d77aa864f7f7b0c --- /dev/null +++ b/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/6b175ec8-ef7b-42c4-8b94-6849c1898c6c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a34654c51273eb5625d2d673be14c90d60ba311fa01bdb358f5f74ab70b242f5 +size 150804 diff --git
a/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/6b175ec8-ef7b-42c4-8b94-6849c1898c6c_model.json b/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/6b175ec8-ef7b-42c4-8b94-6849c1898c6c_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3bdcb20f788e3982c2832a6bc1df2977f813cab4 --- /dev/null +++ b/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/6b175ec8-ef7b-42c4-8b94-6849c1898c6c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5615fb16f7c766004893fac406a7d2b36e659c635fd07abb9353ede1ac08b60e +size 177323 diff --git a/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/6b175ec8-ef7b-42c4-8b94-6849c1898c6c_origin.pdf b/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/6b175ec8-ef7b-42c4-8b94-6849c1898c6c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..bd6a9e715f111ebd759f1ffdc8ea2a93f8a06b35 --- /dev/null +++ b/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/6b175ec8-ef7b-42c4-8b94-6849c1898c6c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:20bed5ed78fd3cc9472cc130f7f35e50b7ec3f2f1a5076c5c71f236fc164fe93 +size 23931252 diff --git a/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/full.md b/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/full.md new file mode 100644 index 0000000000000000000000000000000000000000..77e7f46c32862e29ccd769a741d21e28f7194be1 --- /dev/null +++ b/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/full.md @@ -0,0 +1,588 @@ +# FILTERED-COPHY: UNSUPERVISED LEARNING OF COUNTERFACTUAL PHYSICS IN PIXEL SPACE + +Steeven Janny + +LIRIS, INSA Lyon, France + +steeven.janny@insa-lyon.fr + +Fabien Baradel + +Naver Labs Europe, France + +fabien.baradel@naverlabs.com + +Natalia Neverova + +Meta AI + +nneverova@fb.com + +Madiha Nadri + +LAGEPP, Univ.
Lyon 1, France + +madiha.nadri-wolf@univ-lyon1.fr + +Greg Mori + +Simon Fraser Univ., Canada + +mori@cs.sfu.ca + +Christian Wolf + +LIRIS, INSA Lyon, France + +christian.wolf@insa-lyon.fr + +# ABSTRACT + +Learning causal relationships in high-dimensional data (images, videos) is a hard task, as they are often defined on low-dimensional manifolds and must be extracted from complex signals dominated by appearance, lighting, textures and also spurious correlations in the data. We present a method for learning counterfactual reasoning of physical processes in pixel space, which requires the prediction of the impact of interventions on initial conditions. Going beyond the identification of structural relationships, we deal with the challenging problem of forecasting raw video over long horizons. Our method does not require the knowledge or supervision of any ground truth positions or other object or scene properties. Our model learns and acts on a suitable hybrid latent representation based on a combination of dense features, sets of 2D keypoints and an additional latent vector per keypoint. We show that this better captures the dynamics of physical processes than purely dense or sparse representations. We introduce a new challenging and carefully designed counterfactual benchmark for predictions in pixel space and outperform strong baselines in physics-inspired ML and video prediction. + +# 1 INTRODUCTION + +Reasoning on complex, multi-modal and high-dimensional data is a natural ability of humans and other intelligent agents (Martin-Ordas et al., 2008), and one of the most important and difficult challenges of AI. While machine learning is well suited for capturing regularities in high-dimensional signals, in particular by using high-capacity deep networks, some applications also require an accurate modeling of causal relationships. This is particularly relevant in physics, where causation is considered as a fundamental axiom. 
In the context of machine learning, correctly capturing or modeling causal relationships can also lead to more robust predictions, in particular better generalization to out-of-distribution samples, indicating that a model has overcome the exploitation of biases and shortcuts in the training data. In recent literature on physics-inspired machine learning, causality has often been forced through the addition of prior knowledge about the physical laws that govern the studied phenomena, e.g. (Yin et al., 2021). A similar idea lies behind structural causal models, widely used in the causal inference community, where domain experts model these relationships directly in a graphical notation. This particular line of work makes it possible to perform predictions beyond statistical forecasting, for instance by predicting unobserved counterfactuals, the impact of unobserved interventions (Balke & Pearl, 1994) — "What alternative outcome would have happened if the observed event $X$ had been replaced with an event $Y$ (after an intervention)". Counterfactuals are interesting, as causality intervenes through the effective modification of an outcome. As an example, taken from (Schölkopf et al., 2021), an agent can identify the direction of a causal relationship between an umbrella and rain from the fact that removing an umbrella will not affect the weather. + +We focus on counterfactual reasoning on high-dimensional signals, in particular videos of complex physical processes. Learning such causal interactions from data is a challenging task, as spurious correlations are naturally and easily picked up by trained models. Previous work in this direction was restricted to discrete outcomes, as in CLEVRER (Yi et al., 2020), or to the prediction of 3D trajectories, as in CoPhy (Baradel et al., 2020), which also requires supervision of object positions. In this work, we address the hard problem of predicting the alternative (counterfactual) outcomes of physical processes in pixel space, i.e.
we forecast sequences of 2D projective views of the 3D scene, requiring prediction over long horizons (150 frames corresponding to $\sim 6$ seconds). We conjecture that causal relationships can be modeled on a low-dimensional manifold of the data, and propose a suitable latent representation for the causal model, in particular for the estimation of the confounders and the dynamic model itself. Similar to V-CDN (Kulkarni et al., 2019; Li et al., 2020), our latent representation is based on the unsupervised discovery of keypoints, complemented by additional information in our case. Indeed, while keypoint-based representations can easily be encoded from visual input, as stable mappings from images to points arise naturally, we claim that they are not the most suitable representation for dynamic models. We identified and addressed two principal problems: (i) the individual points of a given set are discriminated through their 2D positions only; therefore, shape, geometry and relationships between multiple moving objects need to be encoded through the relative positions of points to each other, and (ii) the optimal representation for a physical dynamic model is not necessarily a 2D keypoint space, where the underlying object dynamics has also been subject to the imaging process (projective geometry). + +We propose a new counterfactual model, which learns a sparse representation of visual input in the form of 2D keypoints coupled with a (small) set of coefficients per point modeling complementary shape and appearance information. Confounders (object masses and initial velocities) in the studied problem are extracted from this representation, and a learned dynamic model forecasts the entire trajectory of these keypoints from a single (counterfactual) observation.
Building on recent work in data-driven analysis of dynamic systems (Janny et al., 2021; Peralez & Nadri, 2021), the dynamic model is presented in a higher-dimensional state space, where dynamics are less complex. We show that these design choices are key to the performance of our model, and that they significantly improve the capability to perform long-term predictions. Our proposed model outperforms strong baselines in physics-informed learning and video prediction. + +We introduce a new challenging dataset for this problem, which builds on CoPhy, a recent counterfactual physics benchmark (Baradel et al., 2020). We go beyond the prediction of sequences of 3D positions and propose a counterfactual task for predictions in pixel space after interventions on initial conditions (displacing, re-orienting or removing objects). In contrast to the literature, our benchmark also better controls for the identifiability of causal relationships and counterfactual variables, and provides more accurate physics simulation. + +# 2 RELATED WORK + +Counterfactual (CF) reasoning — and learning of causal relationships in ML was made popular by the works of J. Pearl, e.g. (Pearl, 2000), which motivate and introduce mathematical tools detailing the principles of do-calculus, i.e. the study of unobserved interventions on data. A more recent survey links these concepts to the literature in ML (Schölkopf et al., 2021). Recent years have seen the emergence of several benchmarks for CF reasoning in physics. CLEVRER (Yi et al., 2020) is a visual question answering dataset, where an agent is required to answer a CF question after observing a video showing 3D objects moving and colliding. Li et al. (2020) introduce a CF benchmark with two tasks: a scenario where balls interact with each other according to unknown interaction laws (such as gravity or elasticity), and a scenario where clothes are folded by the wind.
The agent needs to identify CF variables and causal relationships between objects, and to predict future frames. CoPhy (Baradel et al., 2020) clearly dissociates the observed experiment from the CF one, and contains three complex 3D scenarios involving rigid body dynamics. However, the proposed method relies on the supervision of 3D object positions, while our work does not require any metadata. + +Physics-inspired ML — and learning visual dynamics was addressed early on with recurrent models (Srivastava et al., 2015; Finn et al., 2016; Lu et al., 2017) or GANs (Vondrick et al., 2016; Mathieu et al., 2016). Kwon & Park (2019) adopt a Cycle-GAN with two discriminator heads, in charge of identifying false images and false sequences, in order to improve the temporal consistency of the model in long-term prediction. Nonetheless, the integration of causal reasoning and prior knowledge in these models is not straightforward. Typical work in physics-informed models relies on disentanglement between physics-informed features and residual features (Villegas et al., 2017a; Denton & Birodkar, 2017) and may incorporate additional information based on the available priors on the scene (Villegas et al., 2017b; Walker et al., 2017). PhyDNet (Le Guen & Thome, 2020) explicitly disentangles visual features from dynamical features, which are supposed to follow a PDE. It achieves SOTA performance on Human3.6M (Ionescu et al., 2014) and Sea Surface Temperature (de Bezenac et al., 2018), but we show that it fails on our challenging benchmark. + +Keypoint detection — is a well-researched problem in vision with widely used handcrafted baselines (Lowe, 1999). New unsupervised variants emerged recently and have been shown to provide a suitable object-centric representation, close to attention models, which simplifies the use of physical and/or geometric priors (Locatello et al., 2020; Veerapaneni et al., 2020).
They are of interest in robotics and reinforcement learning, where a physical agent has to interact with objects (Kulkarni et al., 2019; Manuelli et al., 2020; 2019). KeypointNet (Suwajanakorn et al., 2018) is a geometric reasoning framework, which discovers meaningful keypoints in 3D through spatial coherence between viewpoints. Close to our work, Minderer et al. (2019) propose to learn a keypoint-based stochastic dynamic model. However, the model is not suited for CF reasoning in physics and may suffer from inconsistency in the prediction of dynamics over long horizons. + +# 3 THE FILTERED-COPHY BENCHMARK + +We build on CoPhy (Baradel et al., 2020), retaining its strengths, but explicitly focusing on a counterfactual scenario in pixel space and eliminating the ill-posedness of tasks we identified in the existing work. Each data sample is called an experiment, represented as a pair of trajectories: an observed one with initial condition $X_0 = \mathbf{A}$ and outcome $X_{t=1..T} = \mathbf{B}$ (a sequence), and a counterfactual one $\bar{X}_0 = \mathbf{C}$ and $\bar{X}_{t=1..T} = \mathbf{D}$ (a sequence). Throughout this paper we will use the letters A, B, C and D to distinguish the different parts of each experiment. The initial conditions A and C are linked through a do-operator $\text{do}(X_0 = \mathbf{C})$ , which modifies the initial condition (Pearl, 2018). Experiments are parameterized by a set of intrinsic physical parameters $z$ which are not observable from a single initial image A. We refer to these as confounders. As in CoPhy, in our benchmark the do-operator is observed during training, but confounders are not — they have been used to generate the data, but are not used during training or testing.
Following (Pearl, 2018), the counterfactual task consists in inferring the counterfactual outcome D given the observed trajectory AB and the counterfactual initial state C, following a three-step process: + +① Abduction: use the observed data AB to compute the counterfactual variables, i.e. the physical parameters, which are not affected by the do-operation. +② Action: update the causal model; keep the same identified confounders and apply the do-operator, i.e. replace the initial state $\mathbf{A}$ by $\mathbf{C}$ . +③ Prediction: compute the counterfactual outcome $\mathbf{D}$ using the causal graph. + +The benchmark contains three scenarios involving rigid body dynamics. BlocktowerCF studies stable and unstable 3D cube towers; the confounders are masses. BallsCF focuses on 2D collisions between moving spheres (confounders are masses and initial velocities). CollisionCF is about collisions between a sphere and a cylinder (confounders are masses and initial velocities) (Fig. 1). + +Unlike CoPhy, our benchmark involves predictions in RGB pixel space only. The do-operation consists in visually observable interventions on $\mathbf{A}$ , such as moving or removing an object. The confounders cannot be identified from the single-frame observation $\mathbf{A}$ ; identification requires the analysis of the entire AB trajectory. + +Identifiability of confounders — For an experiment (AB, CD, $z$ ) to be well-posed, the confounders $z$ must be retrievable from AB. For example, since the masses of a stable cube tower cannot generally be identified in all situations, it can be impossible to predict the counterfactual outcome of an unstable tower, as collisions are not resolvable without known masses.
In contrast to CoPhy, we ensure that each experiment $\psi : (X_0, z) \mapsto X_{t=1..T}$ , given initial condition $X_0$ and confounders $z$ , is well posed and satisfies the following constraints: + +Definition 1 (Identifiability, (Pearl, 2018)) The experiment (AB, CD, $z$ ) is identifiable if, for any set of confounders $z'$ : + +$$ +\psi (\mathbf {A}, z) = \psi (\mathbf {A}, z ^ {\prime}) \Rightarrow \psi (\mathbf {C}, z) = \psi (\mathbf {C}, z ^ {\prime}). \tag {1} +$$ + +![](images/79f967182c8a355ad5bb2673d3febfe5a99c89c1802212c59f01c9a4e3dec7c0.jpg) +(a) BlocktowerCF (BT-CF) + +![](images/2e6d2f9547433056dab536223fe024eb98ebf28b8a0ef8696657dd60c62b1f04.jpg) +(b) CollisionCF (C-CF) +Figure 1: The Filtered-CoPhy benchmark suite contains three challenging scenarios involving 2D or 3D rigid body dynamics with complex interactions, including collision and resting contact. Initial conditions A are modified to C by an intervention. Initial motion is indicated through arrows. + +![](images/6669094f0afbe081d113841ea71fb9c19e6f01110d3bd4374d084f83d07749fb.jpg) +(c) BallsCF (B-CF) + +In an identifiable experiment there is no pair $(z,z^{\prime})$ that gives the same trajectory AB but different counterfactual outcomes CD. Details on implementation and impact are in appendix A.1. + +Counterfactuality — We enforce sufficient difficulty of the problem through the meaningfulness of confounders. We remove initial situations where the choice of confounder values has no significant impact on the final outcome: + +Definition 2 (Counterfactuality). Let $z^k$ be the set of confounders $z$ , where the $k^{th}$ value has been modified. The experiment (AB, CD, $z$ ) is counterfactual if and only if: + +$$ +\exists k: \psi (\mathbf {C}, z ^ {k}) \neq \psi (\mathbf {C}, z). \tag {2} +$$ + +In other words, we impose the existence of an object of the scene for which the (unobserved) physical properties have a determining effect on the trajectory. 
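Definitions 1 and 2 translate directly into executable checks, given access to the simulator $\psi$. The following is our own toy illustration; the linear `psi` and the candidate confounder sets are placeholders, not the benchmark's physics engine:

```python
from itertools import product

def psi(x0, z):
    """Toy simulator: trajectory of a point whose velocity is a confounder.

    z = (velocity, mass); mass deliberately has no effect here, which
    illustrates that a confounder invisible in AB cannot break
    identifiability as long as it is equally invisible in CD.
    """
    velocity, _mass = z
    return tuple(round(x0 + t * velocity, 6) for t in range(1, 4))

def is_identifiable(A, C, z, candidate_confounders):
    """Definition 1: every z' reproducing AB must also reproduce CD."""
    return all(psi(C, zp) == psi(C, z)
               for zp in candidate_confounders
               if psi(A, zp) == psi(A, z))

def is_counterfactual(C, z, single_value_variants):
    """Definition 2: some single-confounder change alters the outcome."""
    return any(psi(C, zk) != psi(C, z) for zk in single_value_variants)

z = (1.0, 2.0)
candidates = list(product([0.5, 1.0, 1.5], [1.0, 2.0]))
variants = [(0.5, 2.0), (1.0, 1.0)]   # change one confounder value at a time

print(is_identifiable(0.0, 3.0, z, candidates))
print(is_counterfactual(3.0, z, variants))
```

Both checks pass for this toy experiment; in the benchmark, initial situations failing either constraint are removed during data generation.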
Details on how this constraint was enforced are given in appendix A.2.

Temporal resolution — The physical laws we target involve highly non-linear phenomena, in particular collisions and resting contacts. Collisions are difficult to learn because their effects are intense, brief, and highly non-linear, depending on the geometry of the objects in 3D space. The temporal resolution of physical simulations is therefore of prime importance. A parallel can be made with the Nyquist-Shannon sampling theorem: a trajectory sampled at too low a frequency cannot be reconstructed with precision. We simulate and record trajectories at 25 FPS, compared to the 5 FPS chosen in CoPhy, a choice justified by two experiments. Firstly, Fig. 2 shows the trajectories of the centers of mass of the cubes in BlocktowerCF; colored dots are shown at 25 FPS and black dots at 5 FPS. We can see that collisions with the ground are shorter than the sampling period at 5 FPS, making it hard to infer physical laws from regularities in data at this frequency. A second experiment involves learning a prediction model at different frequencies, confirming the choice of 25 FPS; details are given in appendix A.3.

![](images/3ed793e5723d2e19559f26ee3336e7244f65594f46ddabb01e01b0f809dfc496.jpg)
Figure 2: Impact of temporal frequency on dynamics; 3D trajectories of each cube are shown. Black dots are sampled at 5 FPS, colored dots at 25 FPS. Collisions between the red cube and the ground are not well described by the black dots, making it hard to infer physical laws from regularities in data.

# 4 UNSUPERVISED LEARNING OF COUNTERFACTUAL PHYSICS

We introduce a new model for counterfactual learning of physical processes capable of predicting visual sequences $\mathbf{D}$ in the image space over long horizons. The method does not require any supervision other than videos of observed and counterfactual experiences. The code is publicly available online at https://filteredcophy.github.io.
The model consists of three parts, learning the latent representation and its (counterfactual) dynamics:

- The encoder (De-Rendering module) learns a hybrid representation of an image in the form of (i) a dense feature map and (ii) 2D keypoints combined with (iii) a low-dimensional vector of coefficients, see Fig. 3. Without any state supervision, we show that the model learns a representation which encodes positions in the keypoints, and appearance and orientation in the coefficients.
- The Counterfactual Dynamic (CoDy) model, based on recurrent graph networks in the spirit of Baradel et al. (2020), estimates a latent representation of the confounders $z$ from the keypoint + coefficient trajectories of AB provided by the encoder, and then predicts D in this same space.
- The decoder, which uses the predicted keypoints to generate a pixel-space representation of $\mathbf{D}$ .

![](images/2b70d3df4fb5d330ac81537ddd3c9de6d087658fcc30947e756a0f1ccfc4d6a2.jpg)
Figure 3: We de-render visual input into a latent space composed of a dense feature map $\mathbf{F}$ modeling static information, a set of keypoints $\mathbf{k}$ , and associated coefficients $\mathbf{c}$ . We here show the training configuration taking as input pairs $(\mathbf{X}_{\mathrm{source}},\mathbf{X}_{\mathrm{target}})$ of images. Without any supervision, a tracking strategy emerges naturally through the unsupervised objective: we optimize reconstruction of $\mathbf{X}_{\mathrm{target}}$ given features from $\mathbf{X}_{\mathrm{source}}$ and keypoints+coefficients from $\mathbf{X}_{\mathrm{target}}$ .

# 4.1 DISENTANGLING VISUAL INFORMATION FROM DYNAMICS

The encoder takes an input image $\mathbf{X}$ and predicts a representation with three streams, sharing a common convolutional backbone, as shown in Fig. 3. We propose an unsupervised training objective which favors the emergence of a latent representation disentangling static and dynamic information.

1.
A dense feature map $\mathbf{F} = \mathcal{F}(\mathbf{X})$ , which contains static information, such as the background.
2. A set of 2D keypoints $\mathbf{k} = \mathcal{K}(\mathbf{X})$ , which carry positional information from moving objects.
3. A set of corresponding coefficients $\mathbf{c} = \mathcal{C}(\mathbf{X})$ , one vector $\mathbf{c}_k$ of size $C$ per keypoint $k$ , which encodes orientation and appearance information.

The unsupervised objective is formulated on pairs of images $(\mathbf{X}_{\mathrm{source}},\mathbf{X}_{\mathrm{target}})$ randomly sampled from the same D sequence (see appendix D.1 for details on sampling). Exploiting an assumption on the absence of camera motion, the goal is to favor the emergence of a representation disentangling static and dynamic information. To this end, both images are encoded, and the reconstruction of the target image is predicted with a decoder $\mathcal{D}$ fusing the source dense feature map with the target keypoints and coefficients. This formulation requires the decoder to aggregate dense information from the source and sparse values from the target, naturally leading to motion being captured by the latter.

On the decoder $\mathcal{D}$ side, we add an inductive bias which favors the usage of the 2D keypoint information in a spatial way. The 2D coordinates $\mathbf{k}_k$ for each keypoint $k$ are encoded as Gaussian heatmaps $\mathcal{G}(\mathbf{k}_k)$ , i.e. 2D Gaussian functions centered on the keypoint position.
The additional coefficients, carrying appearance information, are then used to deform the Gaussian mapping into an anisotropic shape using a fixed filter bank $\mathbf{H}$ , as follows:

$$
\mathcal{D}(\mathbf{F}, \mathbf{k}, \mathbf{c}) = \mathcal{R}\left(\mathbf{F}, \mathbf{G}_{1}^{1}, \dots, \mathbf{G}_{1}^{C}, \mathbf{G}_{2}^{1}, \dots, \mathbf{G}_{K}^{C}\right), \quad \mathbf{G}_{k}^{i} = \mathbf{c}_{k}^{C+1} \mathbf{c}_{k}^{i} \left(\mathcal{G}\left(\mathbf{k}_{k}\right) * \mathbf{H}_{i}\right), \tag{3}
$$

where $\mathcal{R}(\ldots)$ is a refinement network performing trained upsampling with transposed convolutions, whose inputs are stacked channelwise. $\mathbf{G}_k^i$ are Gaussian mappings produced from the keypoint positions $\mathbf{k}$ , deformed by filters from the bank $\mathbf{H}$ and weighted by the coefficients $\mathbf{c}_k^i$ . The filters $\mathbf{H}_i$ are defined as fixed horizontal, vertical and diagonal convolution kernels. This choice is discussed in section 5. The joint encoding and decoding pipeline is illustrated in Fig. 3.

The model is trained to minimize the mean squared error (MSE) reconstruction loss, regularized with a loss on spatial gradients $\nabla \mathbf{X}$ weighted by hyper-parameters $\gamma_1, \gamma_2 \in \mathbb{R}$ :

$$
\mathcal{L}_{\mathrm{deren}} = \gamma_{1} \left\| \mathbf{X}_{\mathrm{target}} - \hat{\mathbf{X}}_{\mathrm{target}} \right\|_{2}^{2} + \gamma_{2} \left\| \nabla \mathbf{X}_{\mathrm{target}} - \nabla \hat{\mathbf{X}}_{\mathrm{target}} \right\|_{2}^{2}, \tag{4}
$$

where $\hat{\mathbf{X}}_{\mathrm{target}} = \mathcal{D}(\mathcal{F}(\mathbf{X}_{\mathrm{source}}),\mathcal{K}(\mathbf{X}_{\mathrm{target}}),\mathcal{C}(\mathbf{X}_{\mathrm{target}}))$ is the reconstructed image.
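The keypoint rendering of Equation (3) can be sketched as follows; the grid size, the Gaussian width, the 3×3 filter bank and the single gating coefficient are illustrative stand-ins for the actual architecture:

```python
import math

def gaussian_map(kx, ky, size=16, sigma=1.5):
    """Isotropic Gaussian heatmap G(k) centred on keypoint (kx, ky)."""
    return [[math.exp(-((x - kx) ** 2 + (y - ky) ** 2) / (2 * sigma ** 2))
             for x in range(size)] for y in range(size)]

# A tiny fixed filter bank H: horizontal, vertical and diagonal kernels.
FILTERS = [
    [[0, 0, 0], [1, 1, 1], [0, 0, 0]],   # horizontal
    [[0, 1, 0], [0, 1, 0], [0, 1, 0]],   # vertical
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],   # diagonal
]

def convolve(img, ker):
    """Apply a 3x3 kernel to the interior of a square image."""
    size = len(img)
    out = [[0.0] * size for _ in range(size)]
    for y in range(1, size - 1):
        for x in range(1, size - 1):
            out[y][x] = sum(ker[dy + 1][dx + 1] * img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out

def deformed_maps(kx, ky, coeffs, gate=1.0):
    """Eq. (3) sketch: G_k^i = c_k^{C+1} * c_k^i * (G(k_k) * H_i)."""
    g = gaussian_map(kx, ky)
    return [[[gate * c * v for v in row] for row in convolve(g, ker)]
            for c, ker in zip(coeffs, FILTERS)]

maps = deformed_maps(8.0, 8.0, [1.0, 0.0, 0.0])
```

With a purely "horizontal" coefficient vector, the resulting map is elongated along the x axis; this is how the coefficients can encode orientation on top of the isotropic keypoint.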
Related work — Our unsupervised objective is somewhat related to Transporter (Kulkarni et al., 2019), which, like our model, computes visual feature maps $F_{\mathrm{source}}$ and $F_{\mathrm{target}}$ as well as 2D keypoints $K_{\mathrm{source}}$ and $K_{\mathrm{target}}$ , rendered as 2D maps via Gaussian mapping. It leverages a handcrafted transport equation: $\hat{\Psi}_{\mathrm{target}} = F_{\mathrm{source}} \times (1 - K_{\mathrm{source}}) \times (1 - K_{\mathrm{target}}) + F_{\mathrm{target}} \times K_{\mathrm{target}}$ . As in our case, the target image is reconstructed through a refiner network $\hat{X}_{\mathrm{target}} = \mathcal{R}(\hat{\Psi}_{\mathrm{target}})$ . Transporter suffers from a major drawback when used for video prediction, as it requires parts of the target image to reconstruct the target image — the model was originally proposed in the context of RL and control, where reconstruction is not an objective. It also does not use shape coefficients, requiring shapes to be encoded by several keypoints, or improperly carried through the dense features $F_{\mathrm{target}}$ . This typically leads to complex dynamics that are not representative of the moving objects. We conduct an in-depth comparison between Transporter and our representation in appendix C.2.

# 4.2 DYNAMIC MODEL AND CONFOUNDER ESTIMATION

Our counterfactual dynamic model (CoDy) leverages multiple graph network (GN) based modules (Battaglia et al., 2016) that are combined to solve the counterfactual forecasting tasks of Filtered-CoPhy.
Each of these networks is a classical GN, abbreviated $\mathcal{GN}(\mathbf{x}_k)$ , which contextualizes input node embeddings $\mathbf{x}_k$ through incoming edge interactions $\mathbf{e}_{ik}$ , providing output node embeddings $\hat{\mathbf{x}}_k$ (parameters are not shared over the instances):

$$
\mathcal{GN}\left(\mathbf{x}_{k}\right) = \hat{\mathbf{x}}_{k}, \quad \text{such that} \quad \hat{\mathbf{x}}_{k} = g\left(\mathbf{x}_{k}, \sum_{i} \mathbf{e}_{ik}\right) \quad \text{with} \quad \mathbf{e}_{ij} = f\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right), \tag{5}
$$

where $f$ is a message-passing function and $g$ is an aggregation function.

We define the state of frame $\mathbf{X}_t$ at time $t$ as a stacked vector composed of the keypoints and coefficients computed by the encoder (the de-rendering module), i.e. $\mathbf{s}(t) = [\mathbf{s}_1(t) \dots \mathbf{s}_K(t)]$ where $\mathbf{s}_k(t) = [\mathbf{k}_k \mathbf{c}_k^1 \dots \mathbf{c}_k^{C+1}](t)$ . Following Baradel et al. (2020), given the original initial condition and outcome $\mathbf{AB}$ , CoDy estimates an unsupervised representation $\mathbf{u}_k$ of the latent confounder variables per keypoint $k$ through the counterfactual estimator (CF estimator in Fig. 4). It first contextualizes the sequence $\mathbf{s}^{\mathbf{AB}}(t)$ through a graph network $\mathcal{GN}\big(\mathbf{s}^{\mathbf{AB}}(t)\big) = \mathbf{h}^{\mathbf{AB}}(t) = [\mathbf{h}_1(t) \dots \mathbf{h}_K(t)]$ . We then model the temporal evolution of this representation with a gated recurrent unit (Cho et al., 2014) per keypoint, sharing parameters over keypoints, taking as input the sequence $\mathbf{h}_k$ . Its last hidden vector is taken as the confounder estimate $\mathbf{u}_k$ .
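A minimal, framework-free sketch of the graph network of Equation (5); the toy message function `f` and aggregation `g` stand in for the learned MLPs:

```python
def graph_network(nodes, f, g):
    """Eq. (5) sketch: contextualise each node embedding x_k with the sum of
    incoming messages e_ik = f(x_i, x_k) from every other node, then
    aggregate with g to produce the output embedding."""
    out = []
    for k, xk in enumerate(nodes):
        incoming = [f(xi, xk) for i, xi in enumerate(nodes) if i != k]
        msg = ([sum(vals) for vals in zip(*incoming)]
               if incoming else [0.0] * len(xk))
        out.append(g(xk, msg))
    return out

# Toy message and aggregation functions (stand-ins for learned networks):
f = lambda xi, xk: [a - b for a, b in zip(xi, xk)]   # relative displacement
g = lambda xk, m: [a + 0.1 * b for a, b in zip(xk, m)]

new_nodes = graph_network([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]], f, g)
```

Because the toy messages are antisymmetric, they sum to zero over the whole graph, so the aggregate of all node coordinates is preserved by one pass.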
Recent works on the Koopman operator (Lusch et al., 2018) and the Kazantzis-Kravaris-Luenberger observer (Janny et al., 2021; Peralez & Nadri, 2021) have theoretically shown that, under mild assumptions, there exists a latent space of higher dimension in which a dynamical system given as a PDE has simpler dynamics. Inspired by this idea, we use an encoder-decoder structure within CoDy, which projects our dynamical system into a higher-dimensional state space, performs forecasting of the dynamics in this latent space, and then projects predictions back to the original keypoint space. Note that this dynamics encoder/decoder is different from the encoder/decoder of the de-rendering/rendering modules discussed in section 4.1. The state encoder $\mathcal{E}(\mathbf{s}(t)) = \sigma(t)$ is modeled as a graph network $\mathcal{GN}$ , whose aggregation function projects into an output embedding space $\sigma(t)$ of dimension 256. The decoder $\Delta(\sigma(t)) = \mathbf{s}(t)$ temporally processes the individual contextualized states $\sigma(t)$ with a GRU, followed by a new contextualization with a graph network $\mathcal{GN}$ . Details on the full architecture are provided in appendix D.2.

The dynamic model CoDy performs forecasting in the higher-dimensional space $\pmb{\sigma}(t)$ , computing a displacement vector $\delta(t + 1)$ such that $\pmb{\sigma}(t + 1) = \pmb{\sigma}(t) + \delta(t + 1)$ . It takes the projected state embeddings $\sigma_k^{\mathrm{CD}}(t)$ per keypoint $k$ , concatenated with the confounder representation $\mathbf{u}_k$ , and contextualizes them with a graph network, resulting in embeddings $\mathbf{h}_k^{\mathrm{CD}}(t) = \mathcal{GN}([\sigma_k^{\mathrm{CD}}(t),\mathbf{u}_k])$ , which are processed temporally by a GRU.
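The autoregressive forecasting scheme $\sigma(t+1) = \sigma(t) + \delta(t+1)$ can be sketched as follows; the constant-drift `delta_fn` is an illustrative stand-in for the GRU-based displacement predictor:

```python
def rollout(sigma0, delta_fn, steps):
    """Autoregressive forecasting in the projected space: at each step,
    predict a displacement from the current latent state and add it,
    sigma(t+1) = sigma(t) + delta(t+1)."""
    traj = [list(sigma0)]
    for _ in range(steps):
        d = delta_fn(traj[-1])
        traj.append([s + x for s, x in zip(traj[-1], d)])
    return traj

# Toy displacement model: constant drift per latent dimension.
drift = [0.5, -0.25]
traj = rollout([0.0, 1.0], lambda s: drift, 4)
```

Predicting residual displacements rather than absolute states is a common choice for long-horizon rollouts, since the identity map becomes the trivial baseline.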
We compute the displacement vector at time $t + 1$ + +![](images/f4f0390916fe659baa7620793140ab4f0f0b99d46b80fdb4e72d80a4b675c35e.jpg) +Figure 4: During training, we disconnect the dynamic prediction module (CoDy) from the rendering module (decoder). On test time, we reconnect the two modules. CoDy forecasts the counterfactual outcome D from the sparse keypoints representation of AB and C. The confounders are discovered in an unsupervised manner and provided to the dynamical model. + +
| | | Ours ($N$) | Ours ($2N$) | UV-CDN ($N$) | UV-CDN ($2N$) | PhyDNet | PredRNN |
|---|---|---|---|---|---|---|---|
| BT-CF | PSNR | 23.48 | 24.69 | 21.07 | 21.99 | 16.49 | 22.04 |
| | L-PSNR | 25.51 | 26.79 | 22.36 | 23.64 | 23.03 | 24.97 |
| B-CF | PSNR | 21.19 | 21.33 | 19.51 | 19.54 | 18.56 | 22.31 |
| | L-PSNR | 23.88 | 24.12 | 22.35 | 22.38 | 22.55 | 22.63 |
| C-CF | PSNR | 24.09 | 24.09 | 23.73 | 23.83 | 19.69 | 24.70 |
| | L-PSNR | 26.07 | 26.55 | 26.08 | 26.34 | 24.61 | 26.39 |
+ +
| | Copy B | Copy C | Ours |
|---|---|---|---|
| BT-CF | 43.2 | 20.0 | 9.58 |
| B-CF | 44.3 | 92.3 | 36.12 |
| C-CF | 7.6 | 40.3 | 5.14 |
Copy B: assumes absence of intervention (always outputs sequence B).

Copy C: assumes that the tower is stable (always outputs input C).

Table 1: Comparison with state-of-the-art models in physics-inspired machine learning of video signals, reporting reconstruction error (PSNR and the newly introduced L-PSNR).

Table 2: Comparison with copying baselines. We report $\mathrm{MSE} \times 10^{-3}$ on prediction of keypoints + coefficients.

as a linear transformation from the hidden state of the GRU. We apply the dynamic model in an auto-regressive way to forecast long-term trajectories in the projected space $\sigma_{k}$ , and apply the state decoder to obtain a prediction $\hat{\mathbf{s}}^{\mathbf{CD}}(t)$ .

The dynamic model is trained with a loss in keypoint space,

$$
\mathcal{L}_{\mathrm{physics}} = \sum_{t} \| \mathbf{s}^{\mathbf{CD}}(t) - \hat{\mathbf{s}}^{\mathbf{CD}}(t) \|_{2}^{2} + \gamma_{3} \| \mathbf{s}^{\mathbf{CD}}(t) - \Delta(\mathcal{E}(\mathbf{s}^{\mathbf{CD}}(t))) \|_{2}^{2}. \tag{6}
$$

The first term drives the model to predict the correct outcomes, and the second term favors correct reconstruction of the state in keypoint space. The two terms are weighted with a scalar parameter $\gamma_{3}$ .

# 4.3 TRAINING

End-to-end training of all three modules jointly is challenging, as the same pipeline controls both the keypoint-based state representation and the dynamic module (CoDy), involving two conflicting objectives: optimizing reconstruction pushes the keypoint encoding to be as representative as possible, whereas learning the dynamics favors a simple representation. Faced with these two contradictory tasks, the jointly trained model is numerically unstable and rapidly collapses to regression to the mean. We therefore first train the encoder+decoder pair on reconstruction only, without dynamic information, cf. Equation (4).
Then we freeze the parameters of the keypoint detector and train CoDy to forecast the keypoints of $\mathbf{D}$ , minimizing the loss in Equation (6).

# 5 EXPERIMENTS

We compare the proposed model to three strong baselines for physics-inspired video prediction.

- PhyDNet (Le Guen & Thome, 2020) is a non-counterfactual video prediction model that forecasts future frames using a decomposition between (a) a feature vector that evolves temporally via an LSTM and (b) a dynamic state that follows a PDE learned through specifically designed cells.

![](images/7c2b5544283324188d3c0ad59663a5cfc22c8c387ae8e2562b941bb89caa154a.jpg)
Figure 5: Network $\mathcal{R}$ learns a distortion from multiple oriented ellipses to target shapes.
| | | ✓ $N$ | ✓ $2N$ | ✓ $4N$ | ✗ $N$ | ✗ $2N$ | ✗ $4N$ |
|---|---|---|---|---|---|---|---|
| BT-CF | PSNR | 23.48 | 24.69 | 23.54 | 22.71 | 23.28 | 23.17 |
| | L-PSNR | 21.75 | 23.03 | 21.80 | 21.18 | 21.86 | 21.70 |
| B-CF | PSNR | 21.19 | 21.33 | 21.37 | 20.49 | 21.09 | 20.97 |
| | L-PSNR | 27.88 | 27.16 | 27.07 | 26.33 | 27.07 | 26.73 |
| C-CF | PSNR | 24.09 | 24.09 | 24.26 | 23.84 | 23.66 | 24.06 |
| | L-PSNR | 23.32 | 23.46 | 23.44 | 22.58 | 22.81 | 23.45 |
Table 3: Impact of having additional orientation/shape coefficients (✓) compared to the keypoint-only solution (✗), for different numbers of keypoints: equal to the number of objects ($N$), $2N$ and $4N$.

- V-CDN (Li et al., 2020) is a counterfactual model based on keypoints, close to our work. It identifies confounders from the beginning of a sequence and learns a keypoint predictor through auto-encoding using the Transporter equation (see discussion in Sect. 4.1). As such, it cannot be used for video prediction and is not directly comparable with our work; see details in appendix C. We therefore replace the Transporter by our own de-rendering/rendering modules, from which we remove the additional coefficients. We refer to this model as UV-CDN (for Unsupervised V-CDN).
- PredRNN (Wang et al., 2017) is a ConvLSTM-based video prediction model that leverages spatial and temporal memories through a spatiotemporal LSTM cell.

All models have been implemented in PyTorch; architectures are described in appendix D. For the baselines PhyDNet, UV-CDN and PredRNN, we use the official source code provided by the authors. We evaluate on each scenario of Filtered-CoPhy on the counterfactual video prediction task. For the two counterfactual models (Ours and UV-CDN), we evaluate on the task as intended: we provide the observed sequence AB and the counterfactual initial condition C, and forecast the sequence D. The non-counterfactual baselines are required to predict the entire video from a single frame, in order to prevent them from leveraging shortcuts in a part of the video and bypassing the need for physical reasoning.

We measure performance with time-averaged peak signal-to-noise ratio (PSNR), which directly measures reconstruction quality. However, this metric is mainly dominated by the error on the static background, which is not our main interest.
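For reference, a minimal sketch of the time-averaged PSNR on frames given as flat lists of intensities in $[0, \mathrm{max\_val}]$; a localized variant would simply restrict both sums to masked foreground pixels:

```python
import math

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between two equally-sized frames,
    given as flat lists of pixel intensities."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)
    return float('inf') if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

def time_averaged_psnr(pred_frames, target_frames, max_val=1.0):
    """Average the per-frame PSNR over a predicted video."""
    scores = [psnr(p, t, max_val) for p, t in zip(pred_frames, target_frames)]
    return sum(scores) / len(scores)
```

A uniform per-pixel error of 0.1 on unit-range images, for example, corresponds to a PSNR of 20 dB.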
We also introduce Localized PSNR (L-PSNR), which measures the error in the important regions around moving objects, computed on masked images. We compute the masks using classical background subtraction techniques.

Comparison to the SOTA — We compare our model against UV-CDN, PhyDNet and PredRNN in Table 1, consistently and significantly outperforming the baselines. The gap with UV-CDN is particularly interesting, as it confirms the choice of additional coefficients to model the dynamics of moving objects. PredRNN shows competitive performance, especially on CollisionCF. However, our localized PSNR indicates that this baseline does not accurately reconstruct the foreground, favoring the reconstruction of the background to the detriment of the dynamics of the scene. Fig. 6 visualizes the prediction on a single example; more can be found in appendix G. We also compare to trivial copying baselines in Table 2, namely Copy B, which assumes no intervention and outputs the B sequence, and Copy C, which assumes a stable tower. We evaluate these models in keypoint space, measuring MSE on keypoints and coefficients averaged over time, as the copying baselines are unbeatable in the static background regions, which makes the PSNR metrics unusable.

We provide additional empirical results by comparing the models using multi-object tracking metrics, as well as studies on the impact of the do-operations on PSNR, in appendix E. We also compute an upper bound of our model using the CoPhyNet baseline, as described in Baradel et al. (2020).

Performance on real-world data — Results are reported in appendix F, showing experiments on 516 videos of real wooden blocks introduced in Lerer et al. (2016).

Impact of appearance coefficients — Table 3 reports the effect of the appearance coefficients, comparing to a baseline using a keypoint-only representation.
The coefficients have a significant impact: even increasing the number of keypoints to compensate for the loss of information cannot overcome the advantage of disentangling positions and shapes, as done in our model. We provide a deeper analysis of the + +![](images/2aa4a13b8370142c4ce62512d26af9e2a4030846d668957ca1b2c53411c28e6d.jpg) +Figure 6: Visualization of the counterfactual video prediction quality, comparing our proposed model (Filtered CoPhy) with the two baselines, PhyDNet and UV-CDN, over different timestamps. + +
| State auto-encoder: | ✓ | ✗ |
|---|---|---|
| BT-CF | 9.58 | 11.10 |
| B-CF | 36.12 | 36.88 |
| C-CF | 5.14 | 16.16 |
+ +
| Filter bank: | Fixed | Learned |
|---|---|---|
| BT-CF | 34.40 | 32.04 |
| B-CF | 37.76 | 31.25 |
| C-CF | 34.09 | 33.88 |
Table 4: Impact of the dynamic CoDy encoder (✓) against the baseline operating in the keypoint + coefficient space (✗). We report $\mathrm{MSE} \times 10^{-3}$ on prediction of keypoints + coefficients (4 pts).

Table 5: Learning the filter bank $\mathbf{H}$ from scratch has a mild negative effect on the reconstruction task. We report the PSNR on static reconstruction performance without the dynamic model.

de-rendering/rendering modules in appendix B, which includes visualizations of the navigation of the latent shape space in B.2.

Learning filters — does not have a positive impact on reconstruction performance compared to the handcrafted bank, as can be seen in Table 5. We conjecture that the additional degrees of freedom are redundant with parts of the filter kernels in the refinement module $\mathcal{R}$ : this corresponds to jointly learning a multi-channel representation $\{G_k^i\}_{k=1..K}^{i=1..C}$ for shapes as well as the mapping which geometrically distorts them into the target object shapes. Fixing the latent representation does not constrain the system, as the mapping $\mathcal{R}$ can adjust to it — see Fig. 5.

Impact of the high-dimensional dynamic space — We evaluate the impact of modeling object dynamics in a high-dimensional space through the CoDy encoder in Table 4, comparing the projection to 256 dimensions to a baseline reasoning directly in keypoint + coefficient space. The experiment confirms this choice of KKL-like encoder (Janny et al., 2021).

# 6 CONCLUSION

We introduced a new benchmark for counterfactual reasoning in physical processes that requires video prediction, i.e. predicting raw pixel observations over a long horizon. The benchmark has been carefully designed and generated while imposing constraints on identifiability and counterfactuality.
We also propose a new method for counterfactual reasoning, which is based on a hybrid latent representation combining 2D keypoints and additional latent vectors encoding appearance and shape. We introduce an unsupervised learning algorithm for this representation, which does not require any supervision on confounders or other object properties, and which operates on raw video. Counterfactual prediction of video frames remains a challenging task, and Filtered CoPhy still exhibits failures in maintaining rigid structures of objects over long prediction time-scales. We hope that our benchmark will inspire further breakthroughs in this domain.

# REFERENCES

Alexander Balke and Judea Pearl. Counterfactual probabilities: Computational methods, bounds and applications. In UAI, 1994.
Fabien Baradel, Natalia Neverova, Julien Mille, Greg Mori, and Christian Wolf. Cophy: Counterfactual learning of physical dynamics. In International Conference on Learning Representations, 2020.
Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, and Koray Kavukcuoglu. Interaction networks for learning about objects, relations and physics. In Proceedings of the 30th International Conference on Neural Information Processing Systems, 2016.
Keni Bernardin and Rainer Stiefelhagen. Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. EURASIP Journal on Image and Video Processing, 2008.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing, 2014.
Emmanuel de Bezenac, Arthur Pajot, and Patrick Gallinari. Deep learning for physical processes: Incorporating prior scientific knowledge. In International Conference on Learning Representations, 2018.
Emily L. Denton and Vighnesh Birodkar.
Unsupervised learning of disentangled representations from video. ArXiv, abs/1705.10915, 2017.
Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In Proceedings of the 30th International Conference on Neural Information Processing Systems, 2016.
Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014.
Steeven Janny, Madiha Nadri, Vincent Andrieu, and Christian Wolf. Deep KKL: Data-driven output prediction for non-linear systems. In Conference on Decision and Control (CDC21), 2021.
Tejas D. Kulkarni, Ankush Gupta, Catalin Ionescu, Sebastian Borgeaud, Malcolm Reynolds, Andrew Zisserman, and Volodymyr Mnih. Unsupervised learning of object keypoints for perception and control. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019.
Y. Kwon and M. Park. Predicting future frames using retrospective cycle GAN. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1811-1820, 2019.
Vincent Le Guen and Nicolas Thome. Disentangling physical dynamics from unknown factors for unsupervised video prediction. In Computer Vision and Pattern Recognition (CVPR), 2020.
Adam Lerer, Sam Gross, and Rob Fergus. Learning physical intuition of block towers by example. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48. JMLR.org, 2016.
Yunzhu Li, Antonio Torralba, Anima Anandkumar, Dieter Fox, and Animesh Garg. Causal discovery in physical systems from videos. Advances in Neural Information Processing Systems, 33, 2020.
Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, G. Heigold, Jakob Uszkoreit, A. Dosovitskiy, and Thomas Kipf.
Object-centric learning with slot attention. ArXiv, abs/2006.15055, 2020.
David G. Lowe. Object recognition from local scale-invariant features. In ICCV, 1999.

Chaochao Lu, Michael Hirsch, and Bernhard Schölkopf. Flexible spatio-temporal networks for video prediction. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
Bethany Lusch, Nathan Kutz, and Steven Brunton. Deep learning for universal linear embeddings of nonlinear dynamics. Nature Communications, 2018.
Lucas Manuelli, Wei Gao, Peter R. Florence, and Russ Tedrake. kPAM: Keypoint affordances for category-level robotic manipulation. ArXiv, abs/1903.06684, 2019.
Lucas Manuelli, Yunzhu Li, Peter R. Florence, and Russ Tedrake. Keypoints into the future: Self-supervised correspondence in model-based reinforcement learning. CoRR, 2020.
G. Martin-Ordas, J. Call, and F. Colmenares. Tubes, tables and traps: great apes solve two functionally equivalent trap tasks but show no evidence of transfer across tasks. Animal Cognition, 11(3):423-430, 2008.
Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.
Matthias Minderer, Chen Sun, Ruben Villegas, Forrester Cole, K. Murphy, and Honglak Lee. Unsupervised learning of object structure and dynamics from videos. ArXiv, abs/1906.07889, 2019.
Judea Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, 2000.
Judea Pearl. Causal and counterfactual inference. The Handbook of Rationality, pp. 1-41, 2018.
Johan Peralez and Madiha Nadri. Deep learning-based Luenberger observer design for discrete-time non-linear systems. In Conference on Decision and Control (CDC21), 2021.
Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio.
Toward causal representation learning. Proceedings of the IEEE, 2021.
Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using LSTMs. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, 2015.
Supasorn Suwajanakorn, Noah Snavely, Jonathan Tompson, and Mohammad Norouzi. Discovery of latent 3D keypoints via end-to-end geometric reasoning. In NeurIPS 2018, 2018.
Rishi Veerapaneni, John D. Co-Reyes, Michael Chang, Michael Janner, Chelsea Finn, Jiajun Wu, Joshua Tenenbaum, and Sergey Levine. Entity abstraction in visual model-based reinforcement learning. In Proceedings of the Conference on Robot Learning, 2020.
Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, and Honglak Lee. Decomposing motion and content for natural video sequence prediction. In ArXiv, volume abs/1706.08033, 2017a.
Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, and Honglak Lee. Learning to generate long-term future via hierarchical prediction. In ICML, 2017b.
Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. In Proceedings of the 30th International Conference on Neural Information Processing Systems, 2016.
Jacob Walker, Kenneth Marino, Abhinav Gupta, and M. Hebert. The pose knows: Video forecasting by generating pose futures. 2017 IEEE International Conference on Computer Vision (ICCV), pp. 3352-3361, 2017.
Yunbo Wang, Mingsheng Long, Jianmin Wang, Zhifeng Gao, and Philip S. Yu. PredRNN: Recurrent neural networks for predictive learning using spatiotemporal LSTMs. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems, 2017.

Jiajun Wu, Erika Lu, Pushmeet Kohli, Bill Freeman, and Josh Tenenbaum. Learning to see physics via visual de-animation. In Advances in Neural Information Processing Systems, 2017.
Kexin Yi, Chuang Gan, Yunzhu Li, Pushmeet Kohli, Jiajun Wu, Antonio Torralba, and Joshua B. Tenenbaum. CLEVRER: Collision events for video representation and reasoning. In ICLR, 2020.
Yuan Yin, Vincent Le Guen, Jérémie Dona, Emmanuel de Bezenac, Ibrahim Ayed, Nicolas Thome, and Patrick Gallinari. Augmenting Physical Models with Deep Networks for Complex Dynamics Forecasting. In International Conference on Learning Representations (ICLR), 2021.

# Appendix

# A FURTHER DETAILS ON DATASET GENERATION

Confounders in our setup are masses, which we discretize in $\{1, 10\}$ . For BallsCF and CollisionCF, we can also consider the continuous initial velocities of each object as confounder variables, since they have to be identified in AB to forecast CD.

We simulate all trajectories associated with the various possible combinations of masses from the same initial condition.

Do-interventions, however, depend on the task. For BlocktowerCF and BallsCF, do-interventions consist in (a) removing the top cube or a ball, or (b) shifting a cube/ball on the horizontal plane. In the latter case, for BlocktowerCF, we make sure that the cube does not move too far from the tower, in order to maintain contact. For CollisionCF, the do-interventions are restricted to shifting operations, since there are only two objects (a ball and a cylinder). They can consist of either a switch of the cylinder's orientation between vertical and horizontal, or a shift of the position of the moving object relative to the resting one along one of the three canonical directions $x$ , $y$ and $z$ .

# A.1 ENFORCING THE IDENTIFIABILITY CONSTRAINT

The identifiability and counterfactuality constraints described in section 3 are imposed numerically, i.e. we first sample and simulate trajectories with random parameters and then reject those that violate these constraints.
As stated in section 3, an identifiable experiment guarantees that there is no pair $(z,z^{\prime})$ that gives the same trajectory AB but a different counterfactual outcome CD. Otherwise, there would be no way to choose between $z$ and $z^{\prime}$ only from looking at AB, and thus no way to correctly forecast the counterfactual experiment. By enforcing this constraint, we make sure that there exists at least a set $\{z,z^{\prime},\ldots\}$ of confounders that give at the same time similar observed outcomes AB and similar counterfactual outcomes CD.

In practice, there exists a finite set of possible variables $z_{i}$, corresponding to every combination of masses for each object in the scene (masses take their value in $\{1, 10\}$). During generation, we submit each candidate experiment (AB, CD, $z$) to a test ensuring that the candidate is identifiable. Let $\psi(X_{0}, z)$ be the function that gives the trajectory of a system with initial condition $X_{0}$ and confounders $z$. We simulate all possible trajectories $\psi(A, z_{i})$ and $\psi(C, z_{i})$ for every possible $z_{i}$. If there exists $z' \neq z$ such that the experiment is not identifiable, the candidate is rejected. This constraint requires simulating the trajectory of each experiment several times while modifying the physical properties of the objects.

Equalities in Definition 1 are relaxed by thresholding distances between trajectories. We reject a candidate experiment if there exists a $z'$ such that

$$
\sum_{t=0}^{T} \| \psi(A, z)_t - \psi(A, z')_t \|_2 < \varepsilon \quad \text{and} \quad \sum_{t=0}^{T} \| \psi(C, z)_t - \psi(C, z')_t \|_2 > \varepsilon . \tag{7}
$$

The choice of the threshold value $\varepsilon$ is critical, in particular for the identifiability constraint:

- If the threshold is too high, all AB trajectories are considered equal, but no pair of CD outcomes is ever considered different; the test in Eq. (7) then never fires, which results in acceptance of unidentifiable experiments.
- If the threshold is too low, no pair of AB trajectories is ever considered equal, so the rejection test never fires either. Again, this leads to mistakenly accepting unidentifiable experiments.

There exists an optimal value for $\varepsilon$ which allows us to correctly reject unidentifiable experiments. To measure this optimal threshold, we generated a small instance of the BlocktowerCF dataset without constraining the experiments, i.e. trajectories can be unidentifiable and non-counterfactual. We then plot the percentage of rejected experiments in this unfiltered dataset against the threshold

![](images/c105060773e9897f52f61b0413b53a26bbc50af42ee0d20e9dbf9d06edafba22.jpg)
Figure 7: Experimental tuning of the threshold parameter. We generate an unconstrained subset of BlocktowerCF and plot the percentage of identifiable experiments as a function of the threshold $\varepsilon$.

![](images/cf8000a5c984541ba2721ad7fd07557c1a401ff6c32506e216fbd7b907ea0573.jpg)
| | without constraint | with constraint |
| --- | --- | --- |
| Accuracy | 56% | 84% |
| Corrected Acc. | 58% | 91% |
+ +(a) + +
| FPS | 5 | 15 | 25 | 35 | 45 |
| --- | --- | --- | --- | --- | --- |
| MSE ($\times 10^{-2}$) | 4.58 | 3.97 | 3.74 | 3.82 | 3.93 |
(b)

Table 6: (a) Sanity check of the identifiability constraint in BlocktowerCF, which results in better estimation of cube masses. The corrected accuracy only considers those cubes for which changes in masses are consequential for the trajectory D. (b) MSE between ground-truth 3D positions and predicted positions after 1 second, depending on the sampling rate of the trajectory.

value (Fig. 7, left). We chose the threshold $\varepsilon = 100$, which optimizes discrimination and rejects the highest number of "unidentifiable" trajectories.

To demonstrate the importance of this constraint, we train a recurrent graph network on BlocktowerCF to predict the cube masses from ground-truth state trajectories AB, including poses and velocities (see Fig. 8). It predicts each cube's mass by solving a binary classification task. We train this model on both BlocktowerCF and an alternative version of the scenario generated without the identifiability constraint. The results are shown in Table 6a. We are not aiming for $100\%$ accuracy, and the problem remains difficult in the sense that the identifiability constraint only ensures the identifiability of a set of confounder variables, while our sanity check tries to predict a unique $z$.

Nevertheless, adding the identifiability constraint to the benchmark significantly improves the model's accuracy, which indicates that this property improves the feasibility of Filtered-CoPhy. The corrected accuracy metric focuses solely on the critical cubes, i.e. those cubes whose masses directly define the trajectory CD.

# A.2 ENFORCING THE COUNTERFACTUALITY CONSTRAINT

Let $(\mathbf{AB},\mathbf{CD},z)$ be a candidate experiment, and $z^k$ be a combination of masses identical to $z$ except for the $k^{th}$ value. The counterfactuality constraint consists in checking that there exists at least one $k$ such that $\psi(C,z)\neq \psi(C,z^k)$.
To do so, we simulate $\psi(C,z^k)$ for all $k$ and measure the difference with the candidate trajectory $\psi(C,z)$. Formally, we verify the existence of $k$ such that:

$$
\sum_{t=0}^{T} \| \psi(C, z^{k})_t - \psi(C, z)_t \|_2 > \varepsilon . \tag{8}
$$

# A.3 ANALYZING TEMPORAL RESOLUTION

We analyzed the choice of temporal frequency for the benchmark with another sanity check. We simulate a non-counterfactual dataset from BlocktowerCF where all cubes have equal masses. A recurrent graph network takes as input cube trajectories (poses and velocities) over a time interval of one second and predicts the rest of the trajectory over the following second. We vary the sampling frequency; for example, at 5 FPS, the model receives 5 measurements and predicts the next 5 time steps, which corresponds to a one-second rollout into the future. Finally, we compare the error in 3D positions between the predictions and the ground truth on the last prediction. Results are shown in Table 6b. This check clearly shows that 25 FPS corresponds to the best trade-off between an accurate representation of the collision and the amount of training data.

![](images/9f451eb3163fc25345de5db3f0785c78e0a66aaf2642f9de2b319cf8c2bfeb22.jpg)
Figure 8: Impact of the choice of temporal resolution. Left: We check the identifiability constraint by training a model to predict the cube masses in BlocktowerCF from the observed trajectory AB. The model is a graph neural network followed by a gated recurrent unit. Right: We select the sampling rate by training an agent to forecast a one-second trajectory from the states of the previous second.

![](images/47be2132335f6d5283596e2988717a833d3aee17beae526ac34e786ac60d13e7.jpg)
Figure 9: Visual examples of the impact of temporal resolution on dynamical information for each task in Filtered-CoPhy. Black dots are sampled at 6 FPS while red dots are sampled at 25 FPS.

Fig.
9 shows a practical example of the effect of time resolution on dynamical information in a trajectory.

# A.4 SIMULATION DETAILS

We used PyBullet as the physics engine to simulate Filtered-CoPhy. Each experiment is designed to balance good coverage of confounder combinations against the counterfactuality and identifiability constraints described above. We generate the trajectories iteratively:

1. We sample a combination of masses and other physical characteristics of the given experiment, such as the stability of the tower, object motion in CollisionCF, or whether the do-operation consists in removing an object. This allows us to maintain a balance of confounder configurations.
2. We then search for an initial configuration $\mathbf{A}$. For BlocktowerCF, we make sure that this configuration is unstable to ensure identifiability. We then simulate the trajectory $\mathbf{B}$.
3. We look for a valid do-operation such that the identifiability and counterfactuality constraints are satisfied. If no valid do-operation is found after a fixed number of trials, we reject the experiment.
4. If a valid pair $(\mathbf{AB},\mathbf{CD})$ is found, we add the sample to the dataset.

The trajectories were simulated with a sample time of 0.04 seconds. The video resolution is $448 \times 448$; videos last 6 seconds for BlocktowerCF and BallsCF, and 3 seconds for CollisionCF. We invite interested readers to look at our code for more details, such as do-operation sampling or intrinsic camera parameters. Fig. 10 shows the confounder distribution in the three tasks.

![](images/deac3fa5cb8274277f3ddd1977a076f906e6368afe6efe17d41b5441f7d01627.jpg)
Figure 10: During the dataset generation process, we carefully balance combinations of masses, i.e. the confounders. For BlocktowerCF, we also guarantee that the proportion of stable CD towers is close to $50\%$ for each confounder configuration.
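The iterative procedure above can be sketched as follows. Every function argument is a hypothetical stub standing in for the actual PyBullet code (the names are illustrative, not the repository's API); `is_valid` stands for the identifiability and counterfactuality tests of Eqs. (7)-(8).

```python
def generate_experiment(sample_config, find_initial, simulate,
                        sample_do_op, is_valid, max_trials=20):
    """Sketch of steps 1-4 of the generation loop.

    All arguments are caller-supplied stubs for the simulation code;
    returns (AB, CD) on success, or None when the experiment is rejected.
    """
    cfg = sample_config()            # 1. balanced masses + scenario properties
    a = find_initial(cfg)            # 2. initial configuration A ...
    ab = simulate(a, cfg)            #    ... and observed trajectory AB
    for _ in range(max_trials):      # 3. search for a valid do-operation
        c = sample_do_op(a)
        cd = simulate(c, cfg)
        if is_valid(ab, cd, cfg):    #    constraint checks pass
            return ab, cd            # 4. accept the sample
    return None                      #    reject after max_trials attempts
```

With toy stubs (integer "states", trajectories as lists), the loop returns the first candidate accepted by `is_valid` and `None` when every trial is rejected.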
# B PERFORMANCE EVALUATION OF THE DE-RENDERING MODULE

# B.1 IMAGE RECONSTRUCTION

We evaluate the reconstruction performance of the de-rendering module on the reconstruction task. Note that there is a trade-off between reconstruction performance and dynamic forecasting accuracy: a higher number of keypoints may lead to better reconstruction, but can hurt prediction performance, as the dynamic model becomes more difficult to learn.
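The two reconstruction metrics reported in this section can be sketched as below. The PSNR definition is standard; the gradient-MSE is an assumption about what "MSE Grad" denotes (MSE between finite-difference image gradients), not the paper's exact implementation.

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB between two images in [0, max_val]."""
    mse = np.mean((x - y) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def grad_mse(x, y):
    """Assumed 'MSE Grad' metric: MSE between horizontal and vertical
    finite-difference gradients of the two images."""
    gx = np.diff(x, axis=-1), np.diff(x, axis=-2)
    gy = np.diff(y, axis=-1), np.diff(y, axis=-2)
    return np.mean((gx[0] - gy[0]) ** 2) + np.mean((gx[1] - gy[1]) ** 2)
```

For instance, a uniform noise floor with MSE $10^{-2}$ on images in $[0,1]$ yields a PSNR of 20 dB, while a constant brightness offset leaves the gradient MSE at zero.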
| | # Keypoints | N | 2N | 4N |
| --- | --- | --- | --- | --- |
| BT-CF | PSNR | 34.40 | 35.41 | 34.92 |
| | MSE Grad | 27.24 | 21.39 | 23.99 |
| B-CF | PSNR | 37.76 | 37.06 | 36.98 |
| | MSE Grad | 3.47 | 3.77 | 3.95 |
| C-CF | PSNR | 32.00 | 35.41 | 34.42 |
| | MSE Grad | 32.00 | 12.57 | 17.09 |
Table 7: PSNR (dB) on the task of reconstructing the target from the source (both randomly sampled from Filtered-CoPhy), using 5 coefficients per keypoint. We vary the number of keypoints in our model. Here $N$ is the maximum number of objects in the scene.

Reconstruction error - We first investigate the impact of the number of keypoints in Table 7 by measuring the Peak Signal-to-Noise Ratio (PSNR) between the target image and its reconstruction. We vary the number of keypoints among multiples of $N$, the maximum number of objects in the scene. Increasing the number of keypoints increases reconstruction quality (PSNR) up to a certain point, but results in degraded forecasting performance. Furthermore, doubling the number of keypoints only slightly improves reconstruction accuracy. This tends to indicate that our additional coefficients are already sufficient to model finer-grained visual details. Table 3 in the main paper measures the impact of the number of keypoints and the presence of the additional appearance coefficients on the full pipeline including the dynamic model. Table 8 illustrates the impact of the number of keypoints and the additional appearance coefficients on the reconstruction performance alone. As we can see, the addition of the coefficients consistently improves PSNR for low numbers of keypoints (over 2 dB for $N$ keypoints).

![](images/8e0b08bce3db23dd82c2f7b90f592d1ec70745a02107eb6429f5c25398f4b6eb.jpg)
Figure 11: Reconstructions produced by the de-rendering module. Our model correctly marks each object in the scene and achieves satisfactory reconstruction.

The improvement is less visible for larger numbers of keypoints, since 3D visual details can then be encoded via keypoint positions, making the coefficients less relevant. Visualizations are shown in Fig. 11.
| | # Keypoints | N | N | 2N | 2N | 4N | 4N |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | Coefficients | ✗ | ✓ | ✗ | ✓ | ✗ | ✓ |
| BT-CF | PSNR | 32.53 | 34.40 | 33.97 | 35.41 | 34.57 | 34.92 |
| | MSE Grad | 41.86 | 27.24 | 35.24 | 21.39 | 28.06 | 23.99 |
| B-CF | PSNR | 34.62 | 37.76 | 36.94 | 37.06 | 37.15 | 36.98 |
| | MSE Grad | 6.22 | 3.47 | 4.16 | 3.77 | 4.07 | 3.95 |
| C-CF | PSNR | 30.65 | 32.00 | 33.89 | 35.41 | 35.63 | 34.42 |
| | MSE Grad | 12.78 | 32.00 | 20.59 | 12.57 | 11.72 | 17.09 |
Table 8: Impact of the number of keypoints and the presence of the additional appearance coefficients in the de-rendering module for pure image reconstruction (no dynamic model). We report PSNR (dB) and MSE on the image gradient. $N$ is the maximum number of objects in the scene. The coefficients significantly improve the reconstruction for low numbers of keypoints. This table is related to Table 3 in the main paper, which measures this impact on the full pipeline.

# B.2 NAVIGATING THE LATENT COEFFICIENT MANIFOLD

We evaluate the influence of the additional appearance coefficients on our de-rendering model by navigating its manifold. To do so, we sample a random pair $(X_{\mathrm{source}}, X_{\mathrm{target}})$ from an experiment in BlocktowerCF and compute the corresponding source features and target keypoints and coefficients. Then, we vary each component of the target keypoints and coefficients and observe the reconstructed image (Fig. 12). We observe that the keypoints accurately control the position of the cube along both spatial axes. The rendering module does infer some hints of 3D shape information from the vertical position of the cube, exploiting a shortcut in learning. On the other hand, while not being supervised, the coefficients naturally learn to encode different orientations in space and distance from the camera. Interestingly, a form of disentanglement emerges. For example, coefficients $n^{\circ}1$ and 2 control rotation around the z-axis, and coefficient $n^{\circ}4$ models rotation around the y-axis. The last coefficient represents both the size of the cube and its presence in the image.

![](images/f0d631ce70b9951aa3d776c17ed6101ee7705bc30b7f00e006a59801f0d5f66d.jpg)
Figure 12: Navigating the manifold of the latent coefficient representation. Each line corresponds to variations of one keypoint coordinate or coefficient and shows the effect on a single cube.
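The sweep described above reduces to a simple loop once a decoder is available. The snippet below is a schematic illustration with a toy decoder standing in for the trained rendering module (whose API is not shown here); only the loop structure reflects the experiment.

```python
import numpy as np

def sweep_coefficient(decode, keypoints, coeffs, index, values):
    """Vary a single appearance coefficient while keeping the keypoints and
    all other coefficients fixed, collecting one reconstruction per value."""
    frames = []
    for v in values:
        c = coeffs.copy()
        c[index] = v                      # perturb one latent component
        frames.append(decode(keypoints, c))
    return frames

# toy decoder: brightness of a 4x4 image driven by the swept coefficient
frames = sweep_coefficient(lambda k, c: np.full((4, 4), c[0]),
                           np.zeros(2), np.zeros(5), 0, np.linspace(0, 1, 5))
```

Stacking the resulting frames row by row produces exactly the kind of grid shown in Fig. 12.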
# C COMPARISON WITH THE TRANSPORTER BASELINE

# C.1 COMPARISON WITH OUR DE-RENDERING MODEL

As described in section 4.1, the Transporter (Kulkarni et al., 2019) is a keypoint detection model somewhat close to our de-rendering module. It leverages the transport equation to compute a reconstruction:

$$
\hat{\Psi}_{\text{target}} = F_{\text{source}} \times \left(1 - K_{\text{source}}\right) \times \left(1 - K_{\text{target}}\right) + F_{\text{target}} \times K_{\text{target}} . \tag{9}
$$

This equation transmits information from the input by two means: the 2D positions of the keypoints ($K_{\mathrm{target}}$) and the dense visual features of the target ($F_{\mathrm{target}}$). In comparison, our de-rendering relies solely on the keypoints from the target image and does not require a dense vector to be computed on the target in order to reconstruct it. This makes the Transporter not directly comparable with our de-rendering module. We nevertheless compare the performance of the two models in Table 9 and provide visual examples in Fig. 13. Even though the two models are not comparable, as the Transporter uses additional information, our model still outperforms the Transporter for small numbers of keypoints. Interestingly, for higher numbers of keypoints the Transporter tends to discover keypoints far from the objects. We investigate this behavior in the following section and show that it is actually a critical problem for learning causal reasoning on the discovered keypoints.

# C.2 ANALYSIS OF TRANSPORTER'S BEHAVIOR

The original version of the V-CDN model (Li et al., 2020) is based on the Transporter (Kulkarni et al., 2019). We have already highlighted the fact that this model is not comparable on our task, as it requires not only the target keypoints $K_{\text{target}}$ but also a dense feature map $F_{\text{target}}$, whose dynamics can hardly be learned due to its high dimensionality.
More precisely, the transport equation (Eq. 9) passes information from the target by two means: the 2D positions of the keypoints ($K_{\text{target}}$) and the dense feature map of the target ($F_{\text{target}}$). The number of keypoints therefore becomes a highly sensitive parameter, as the Transporter can choose to preferentially transfer information through the target features rather than through the keypoint locations. When the number of keypoints is low, they act as a bottleneck, and the model has to carefully discover them to reconstruct the image.
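The transport equation itself is a single masking expression. A minimal sketch on array-shaped feature maps and keypoint heatmaps (shapes are illustrative):

```python
import numpy as np

def transport(f_src, k_src, f_tgt, k_tgt):
    """Transport equation (Eq. 9). f_*: dense feature maps, k_*: keypoint
    heatmaps in [0, 1]; all arrays broadcast to a common (C, H, W) shape.
    Where k_tgt is high the target features are pasted in; elsewhere the
    source features pass through, suppressed around both sets of keypoints."""
    return f_src * (1 - k_src) * (1 - k_tgt) + f_tgt * k_tgt
```

At a pixel fully covered by a target keypoint ($k_{\text{tgt}}=1$) the output equals the target features; where no keypoint is active it equals the source features.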
| # Keypoints | Ours (4) | Ours (8) | Ours (16) | Transporter (4) | Transporter (8) | Transporter (16) |
| --- | --- | --- | --- | --- | --- | --- |
| BT-CF | 34.40 | 35.41 | 34.92 | 34.10 | 34.88 | 39.20 |
| B-CF | 37.76 | 37.06 | 36.98 | 34.75 | 34.78 | 35.13 |
| C-CF | 35.41 | 34.42 | 35.98 | 32.66 | 33.39 | 34.47 |
Table 9: PSNR (dB) on the task of reconstructing the target from the source (both randomly sampled from Filtered-CoPhy), using 5 coefficients per keypoint. We vary the number of keypoints in both our model and the Transporter. Note that the Transporter uses target features to reconstruct the image, hence it is not comparable with our model.

![](images/4942ddbf445924cbb95683438ae8d2954efb07897ce064677adf75e396989d21.jpg)
Figure 13: Examples of images reconstructed by our de-rendering module and by the Transporter.

On the other hand, when we increase the number of keypoints, the Transporter stops tracking objects in the scene and transfers visual information through the dense feature map, making the predicted keypoints unnecessary for image reconstruction, and therefore not representative of the dynamics.

To illustrate our hypothesis, we set up the following experiment. Starting from a trained Transporter model, we fix the source image to be $X_0$ (the first frame of the trajectory) during the evaluation step. Then, we compute features and keypoints on target frames $X_t$ regularly sampled in time. We reconstruct the target image using the transport equation, but without updating the target keypoints. Practically, this consists in computing $\hat{\Psi}_{\text{target}}$ with Eq. (9), substituting $K_{\text{source}}$ for $K_{\text{target}}$.

Results are shown in Fig. 14. There is no dynamic forecasting involved in this figure, and the Transporter we used was trained in the regular way; we only change the transport equation at evaluation time. Even though the keypoint positions have been fixed, the Transporter manages to reconstruct a significant part of the images, which indicates that part of the dynamics has been encoded in the dense feature map.
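The frozen-keypoint probe amounts to evaluating Eq. (9) with the source keypoints in both slots, so the target can only contribute through its dense features. A sketch:

```python
import numpy as np

def frozen_keypoint_probe(f_src, k_src, f_tgt):
    """Eq. (9) with the source keypoints substituted for the target ones:
    any reconstruction quality that survives must come from f_tgt alone."""
    k = k_src
    return f_src * (1 - k) * (1 - k) + f_tgt * k
```

If the probe still reconstructs moving objects well, the dense feature map, not the keypoints, is carrying the dynamics, which is exactly the failure mode shown in Fig. 14.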
Note that this does not contradict the claim in Li et al. (2020), since they do not evaluate V-CDN in pixel space. A rational choice of the number of keypoints leads to satisfactory performance, allowing V-CDN to accurately forecast the trajectory in keypoint space and retrieve the hidden confounders on their dataset.

# C.3 TEMPORAL INCONSISTENCY ISSUES

Increasing the number of keypoints of the Transporter may lead to temporal inconsistency in long-range reconstruction. For example, a keypoint that tracks the edge of a cube in the first frame may target a face of this same cube in the future, since dynamics does not intervene in the keypoint discovery process.

![](images/934a9afdd5c26cea0076d47fb5553037b0b67a2badf38e92fbd69437597fd0a2.jpg)
Figure 14: We evaluate the Transporter with a varying number of keypoints to reconstruct images regularly sampled in a trajectory while keeping the target keypoints fixed. Even if the keypoints are not moving, the Transporter still manages to reconstruct a significant part of the image, which indicates that the keypoints are not fully responsible for encoding the dynamics of the scene.

Our de-rendering module directly addresses this through the use of additional appearance coefficients, which allows us to limit the number of keypoints to the number of objects in the scene, effectively alleviating the consistency issue. Fig. 15 illustrates this phenomenon by plotting the discovered keypoint locations forward in time, as well as the 2D location of the center of mass of each object. Note that the Transporter suffers from the temporal inconsistency issue with numbers of keypoints as low as 4 (green cube). In contrast, our model solves the problem and accurately tracks the centers of mass, even though they were never supervised.
+ +# D DETAILS OF MODEL ARCHITECTURES + +# D.1 DE-RENDERING MODULE + +We call a "block" a 2D convolutional layer followed by a 2D batch norm layer and ReLU activation. The exact architecture of each part of the encoder is described in Table 10a. The decoder hyperparameters are described in Table 10b. + +Dense feature map estimator $\mathcal{F}$ - We compute the feature vector from $\mathbf{X}_{\mathrm{source}}$ by applying a convolutional network $\mathcal{F}$ on the output of the common CNN of the encoder. This produces the source feature vector $\mathbf{F}_{\mathrm{source}}$ of shape (batch, 16, 28, 28). + +Keypoint detector $\kappa$ - The convolutional network $\kappa$ outputs a set of 2D heatmaps of shape (batch, $K$ , 28, 28), where $K$ is the desired number of keypoints. We apply a spatial softmax function on the two last dimensions, then we extract a pair of coordinates on each heatmap by looking for the location of the maximum, which gives us $\mathbf{K}_{\mathrm{target}}$ of shape (batch, $K$ , 2). + +![](images/05f33807d0f78175c039b716c07c932662c80d53dbce3ee1084d4707beea9e8f.jpg) +Figure 15: Temporal inconsistency in long-range reconstruction. We show the keypoints discovered on images taken from different time steps (black dots). We also compute the 2D location of the center of mass of each object in the scene (white dots). Our de-rendering module accurately tracks the centers of mass, which have never been supervised. + +**Coefficient estimator $\mathcal{C}$ - We obtain the coefficient by applying a third convolutional network $\mathcal{C}$ to the output of the common encoder CNN, which again results in a set of 2D vectors of shape (batch, $K, 28, 28$ ). These vectors are flattened channel-wise and provide a tensor of shape (batch, $K, 28 \times 28$ ) fed to an MLP (see Table 10a for the exact architecture) that estimates the coefficients $\mathbf{C}_{\mathrm{target}}$ of shape (batch, $K, C + 1$ ). 
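The keypoint read-out described above (a spatial softmax over each heatmap, then the location of the maximum) can be sketched as follows; the exact implementation in the codebase may differ.

```python
import numpy as np

def extract_keypoints(heatmaps):
    """Turn K heatmaps of shape (K, H, W) into K (x, y) coordinates.

    A spatial softmax is applied over each H*W map, and the location of
    the maximum is returned (the softmax is monotone, so the peak matches
    the raw heatmap's argmax)."""
    K, H, W = heatmaps.shape
    flat = heatmaps.reshape(K, -1)
    probs = np.exp(flat - flat.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)      # spatial softmax
    idx = probs.argmax(axis=1)                     # peak location per map
    return np.stack([idx % W, idx // W], axis=1)   # (K, 2) as (x, y)
```

In the paper's setting the heatmaps are $28 \times 28$, so the resulting coordinates live on that grid before any normalization.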
+ +Gaussian mapping $\mathcal{G}$ - The keypoint vector $\mathbf{K}_{\mathrm{target}}$ is mapped to a 2D vector through a Gaussian mapping process: + +$$ +\mathcal {G} (\mathbf {k}) (x, y) = \exp \left(- \frac {\left(x - \mathbf {k} _ {x}\right) ^ {2} + \left(y - \mathbf {k} _ {y}\right) ^ {2}}{\sigma^ {2}}\right), \tag {10} +$$ + +where $\mathcal{G}(\mathbf{k})\in \mathbb{R}^{28\times 28}$ is the Gaussian mapping of the keypoint $\mathbf{k} = [\mathbf{k}_x\mathbf{k}_y]$ . We deform these Gaussian mappings by applying convolutions with filters $\mathbf{H}_i$ controlled by the coefficients $\mathbf{c}_k^i$ . + +The filters from $\mathbf{H}$ are $5\times 5$ kernels that elongate the Gaussian in a specific direction. Practically, we obtain the filter $\mathbf{H}_i$ by drawing a line crossing the center of the kernel and with a slope angle of $i\frac{\pi}{C}$ where $C$ is the number of coefficients. We then apply a 2D convolution: + +$$ +\mathbf {G} _ {k} ^ {i} = \mathbf {c} _ {k} ^ {C + 1} \mathbf {c} _ {k} ^ {i} \left(\mathcal {G} \left(\mathbf {k} _ {k}\right) * \mathbf {H} _ {i}\right). \tag {11} +$$ + +Note that we also compute a supplementary coefficient $\alpha_{k}^{C + 1}$ used as a gate on the keypoints. By setting this coefficient to zero, the de-rendering module can de-activate a keypoint (which is redundant with deactivating the full set of coefficients for this keypoint). + +Refiner $\mathcal{R}$ - To reconstruct the target image, we channel-wise stack feature vectors from the source with the constructed filters and feed them to the decoder CNN $\mathcal{R}$ (Table 10b). + +We trained the de-rendering module on pairs of images $(\mathbf{X}_{\mathrm{source}},\mathbf{X}_{\mathrm{target}})$ randomly sampled from sequences $\mathbf{D}$ . For a given sequence $\mathbf{D}$ , we take $T - 25$ first frames of the trajectory as a source (where $T$ is the number of frames in the video). The last 25 frames are used as a target. 
For evaluation, we take the $25^{th}$ frame as the source and the $50^{th}$ frame as the target. We use the Adam optimizer with a learning rate of $10^{-3}$, $\gamma_{1} = 10^{4}$ and $\gamma_{2} = 10^{-1}$ to minimize equation 4.
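The Gaussian mapping of Eq. (10) and the oriented line kernels $\mathbf{H}_i$ can be sketched as below. The value of `sigma` is an assumption (the paper's exact setting is not restated here), and the line-drawing routine is one plausible construction of the kernels.

```python
import numpy as np

def gaussian_map(k, size=28, sigma=1.5):
    """Eq. (10): render a keypoint k = (kx, ky) as a 2D Gaussian heatmap.
    sigma is an assumed value, not necessarily the paper's setting."""
    xs, ys = np.meshgrid(np.arange(size), np.arange(size))
    return np.exp(-((xs - k[0]) ** 2 + (ys - k[1]) ** 2) / sigma ** 2)

def line_filter(i, C, size=5):
    """A normalized oriented 5x5 kernel H_i: a line through the center with
    slope angle i*pi/C, used to elongate the Gaussian via 2D convolution."""
    h = np.zeros((size, size))
    c = size // 2
    for r in np.linspace(-c, c, 4 * size):       # walk along the line
        x = c + int(round(r * np.cos(i * np.pi / C)))
        y = c + int(round(r * np.sin(i * np.pi / C)))
        h[y, x] = 1.0
    return h / h.sum()
```

Convolving `gaussian_map(k)` with `line_filter(i, C)` and scaling by the coefficients then gives the deformed maps $\mathbf{G}_k^i$ of Eq. (11).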
CNN

| | Module | in ch. | out ch. | kernel | stride | pad. |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Block | 3 | 32 | 7 | 1 | 3 |
| 2 | | 32 | 32 | 3 | 1 | 1 |
| 3 | | 32 | 64 | 3 | 2 | 1 |
| 4 | | 64 | 64 | 3 | 1 | 1 |
| 5 | | 64 | 128 | 3 | 2 | 1 |

$\mathcal{F}()$

| | Module | in ch. | out ch. | kernel | stride | pad. |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Block | 128 | 16 | 3 | 1 | 1 |

$\mathcal{K}()$

| | Module | in ch. | out ch. | kernel | stride | pad. |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Block | 128 | 128 | 3 | 1 | 1 |
| 2 | Conv2d | 128 | K | 3 | 1 | 1 |
| 3 | Softplus | | | | | |

$\mathcal{C}()$

| | Module | in ch. | out ch. | kernel | stride | pad. |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Block | 128 | K | 3 | 1 | 1 |
| 2 | Flatten | | | | | |

| | Module | in | out |
| --- | --- | --- | --- |
| 3 | Linear+ReLU | 784 | 2048 |
| 4 | Linear+ReLU | 2048 | 1024 |
| 5 | Linear+ReLU | 1024 | 512 |
| 6 | Linear+ReLU | 512 | C |
| 7 | Sigmoid | | |
+ +(a) Encoder architecture + +
$\mathcal{R}()$

| | Module | in ch. | out ch. | kernel | stride | pad. |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Block | 16+K×C | 128 | 3 | 1 | 1 |
| 2 | | 128 | 128 | 3 | 1 | 1 |
| 3 | | 128 | 64 | 3 | 1 | 1 |
| 4 | UpSamplingBilinear2d(2) | | | | | |
| 5 | Block | 64 | 64 | 3 | 1 | 1 |
| 6 | | 64 | 32 | 3 | 1 | 1 |
| 7 | UpSamplingBilinear2d(2) | | | | | |
| 8 | Block | 32 | 32 | 3 | 1 | 1 |
| 9 | | 32 | 32 | 7 | 1 | 1 |
| 10 | Conv2d | 32 | 3 | 1 | 1 | 1 |
| 11 | TanH | | | | | |
(b) Decoder architecture

Table 10: Architectural details of the de-rendering module.

# D.2 CoDY

We describe the architectural choices made in CoDy. Let

$$
\mathbf{s}(t) = \left[ \mathbf{k}_{k}\ \mathbf{c}_{k}^{1} \dots \mathbf{c}_{k}^{C+1}\ \dot{\mathbf{k}}_{k}\ \dot{\mathbf{c}}_{k}^{1} \dots \dot{\mathbf{c}}_{k}^{C+1} \right]_{k=1..K} \tag{12}
$$

be the state representation of an image $\mathbf{X}_t$, composed of the $K$ keypoints' 2D coordinates with their $C + 1$ coefficients. The time derivative of each component of the state is computed via a backward (implicit Euler) difference scheme $\dot{\mathbf{k}}(t) = \mathbf{k}(t) - \mathbf{k}(t - 1)$. We use a subscript notation to distinguish the keypoints from AB and CD.

CF estimator - The latent representation of the confounders is discovered from $\mathbf{s}^{\mathbf{AB}}$. The graph neural network of this module implements the message passing function $f()$ and the aggregation function $g()$ (see equation 5) as MLPs with 3 hidden layers of 64 neurons and ReLU activation units. The resulting node embeddings $\mathbf{h}^{\mathbf{AB}}(t) = \mathcal{G}\mathcal{N}(\mathbf{s}^{\mathbf{AB}}(t))$ belong to $\mathbb{R}^{128}$. We then apply a gated recurrent unit with 2 layers and a hidden vector of size 32 to each node in $\mathbf{h}^{\mathbf{AB}}(t)$ (sharing parameters between nodes). The last hidden vector is used as the latent representation of the confounders $u_{k}$.

State encoder-decoder - The state encoder is modeled as a $\mathcal{G}\mathcal{N}$ where the message passing function and the aggregation function are MLPs with one hidden layer of 32 units. The encoded state $\sigma^{\mathrm{CD}} = \mathcal{E}(s^{\mathrm{CD}})$ lies in $\mathbb{R}^{256}$. We perform dynamical prediction in this $\sigma$ space, and then project the forecast back into the keypoint space using a decoder.
The decoder $\Delta(\sigma(t))$ first applies a shared GRU with one layer and a hidden vector of size 256 to each keypoint $\sigma_k(t)$, followed by a graph neural network with the same structure as the state encoder.

Dynamic system - Our dynamic system forecasts the future state $\hat{\sigma}(t + 1)$ from the current estimate $\hat{\sigma}(t)$ and the confounders $\mathbf{u} = [\mathbf{u}_1 \dots \mathbf{u}_K]$. It first applies a graph neural network to the concatenated vector $[\hat{\sigma}(t)\ \mathbf{u}]$. The message passing function $f$ and the aggregation function $g$ are MLPs with 3 hidden layers of 64 neurons and ReLU activation functions. The resulting node embeddings $\mathcal{GN}(\hat{\sigma}(t))$ belong to $\mathbb{R}^{64}$ and are fed to a GRU sharing weights among each node with
+ +# E ADDITIONAL QUANTITATIVE EVALUATION + +# E.1 MULTI-OBJECT TRACKING METRICS + +When the number of keypoints matches the number of objects in the scene, the keypoint detector naturally and in an unsupervised manner places keypoints near the center of mass of each object (see Figure 15). Leveraging this emerged property, we provide additional empirical demonstration of the accuracy of our model by computing classical Multi-Object Tracking (MOT) metrics. In particular, we computed the Multi-Object Tracking Precision (MOTP) and the Multi-Object Tracking (MOTA) as described in Bernardin & Stiefelhagen. + +- MOTA requires to compute the number of missed objects (i.e. not tracked by a keypoints) and the number of false positives (i.e. keypoints that do not represent an actual object). MOTA takes values in $[-1, 1]$ , where 1 represents perfect tracking: + +$$ +\mathrm {M O T A} = 1 - \frac {\sum_ {t} m _ {t} + f _ {t} + s _ {t}}{\sum_ {t} g _ {t}}, \tag {14} +$$ + +where $m_t$ is the number of missed objects at time $t$ , $f_t$ is the number of false positives at time $t$ , $s_t$ is the number of swaps at time $t$ , and $g_t$ is the number of objects at time $t$ . + +- MOTP is a measurement of the distance between the keypoints and the ground-truth centers of mass conditioned on the pairing process: + +$$ +\mathrm {M O T P} = \frac {\sum_ {i , t} d _ {t} ^ {i}}{\sum_ {t} c _ {t}}, \tag {15} +$$ + +where $c_{t}$ is the number of accurately tracked objects at time $t$ and $d_{t}^{i}$ is the distance between the keypoint and the center of mass of the $i^{th}$ association {keypoints+center of mass}. + +Note that these metrics are related: low MOTP indicates that the tracked objects are tracked precisely, and low MOTA indicates that many objects are missed. Thus, to be efficient, a model needs to achieve both low MOTP and high MOTA. 
+ +We also reported the performances of CoPhyNet (Baradel et al., 2020) that predicts counterfactual outcomes in euclidian space using the ground-truth 3D space. As it uses GT object positions during training, it is not comparable and should be considered as a soft upper bound of our method. We present our results in Table 11. This confirms the superiority of our method over UV-CDN in keypoint space. The upper bound CoPhyNet takes advantage of the non-ambiguous 3D representation modeled by the ground-truth state of the object of the scene. + +Our method also outperforms CoPhyNet on the ballsCF task, probably due to two phenomena. First, ballsCF is the only 2D task of FilteredCoPhy. Thus, CoPhyNet does not have an advantage of using ground-truth 3D positions. Second, the state-encoder in CoDy projects the 2D position of each sphere in a space where the dynamics is easier to learn, probably by breaking the non-linearity of collisions. + +
| | | Ours | UV-CDN | CoPhyNet (not comparable) |
| --- | --- | --- | --- | --- |
| BT-CF | MOTA ↑ | 0.46 | 0.16 | 0.44 |
| | MOTP ↓ | 3.34 | 4.51 | 0.72 |
| B-CF | MOTA ↑ | -0.07 | -0.73 | -0.16 |
| | MOTP ↓ | 4.64 | 5.83 | 5.10 |
| C-CF | MOTA ↑ | -0.14 | -0.19 | 0.21 |
| | MOTP ↓ | 6.35 | 6.35 | 4.37 |
Table 11: MOT metrics for different methods. While not comparable, we report the CoPhyNet performance as a soft upper bound. Our method and UV-CDN use one keypoint per object. MOTA $\uparrow$: higher is better; MOTP $\downarrow$: lower is better.

![](images/b8a84f91db49644be763177ea7ee0c546a32134cac6409874c3ae04391dccd1f.jpg)
Figure 16: Effect of the do-operation on the quality of the forecasted video. (left) Our method generalizes well to a wide range of "Move" operation amplitudes. (right) We observe a difference of 3 dB in favor of the Move do-operation, which is unsurprising, as it is the least disturbing intervention.

# E.2 IMPACT OF THE DO-OPERATIONS

We also measure the impact of the do-operation types on video forecasting. Fig. 16 (left) is obtained by computing the PSNR for each example of the training set and reporting the result on a 2D graph, depending on the amplitude of the displacement that characterizes the do-operation. We applied the same method to obtain Fig. 16 (right), which focuses on the type of do-operation, that is, moving, removing or rotating an object. These figures are computed using the 2N-keypoint models.

Our method generalizes well across different do-operations, in both the type of the operation and its amplitude. A key to this success is the careful design of the dataset (balanced with respect to the types of do-operations) and a reasonable representation (our set of keypoints and coefficients) able to detect and model each do-operation from images.

# F EXPERIMENTS ON REAL-WORLD DATA

Our contributions are focused on the discovery of causality in physics through counterfactual reasoning. We designed our model to solve the new benchmark and provided empirical evidence that our method is well suited for modeling rigid-body physics and counterfactual reasoning. The following section aims to demonstrate that our approach can also be extended to a real-world dataset.
We provide qualitative results obtained on a derivative of BlocktowerCF using real cube towers (Lerer et al., 2016). + +![](images/0a99b3a8c117390db774984de8e46a8714ab649c73f69352590363f5e070baa8.jpg) +Figure 17: We evaluate our method on a real-world dataset, Blocktower IRL. After fine-tuning, CoDy manages to accurately forecast future frames from real videos. + +We refer to this dataset as Blocktower IRL. It is composed of 516 videos of wooden blocks stacked in a stable or unstable manner. The number of cubes in a tower varies from 2 to 4. We aim to predict the dynamics of the tower in pixel space. This is closely related to our BlocktowerCF task (which was inspired by the seminal work of Lerer et al. (2016)) with three main differences: (1) the dataset shows real cube towers, (2) the problem is not counterfactual, i.e., every cube has the same mass, and (3) the dataset contains only a few videos. + +To cope with the lack of data, we exploit our pre-trained models on BlocktowerCF and fine-tune on Blocktower IRL. The adaptation of the de-rendering module is straightforward: we choose the 4 keypoints-5 coefficients configuration and train the module for image reconstruction after loading the weights from previous training on our simulated task. CoDy, on the other hand, requires careful tuning to preserve the learned regularities from BlocktowerCF and prevent over-fitting. Since Blocktower IRL is not counterfactual, we deactivate the confounder estimator and set $u_{k}$ to vectors of ones. We also freeze the weights of the last layers of the MLPs in the dynamic model. + +To the best of our knowledge, we are the first to use this dataset for video prediction. Lerer et al. (2016) and Wu et al. (2017) leverage the videos for stability prediction, but actual trajectory forecasting was not their main objective. To quantitatively evaluate our method, we predict 20 frames in the future from a single image sampled in the trajectory.
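To make the evaluation protocol concrete, PSNR between predicted and ground-truth frames can be computed as follows (a minimal NumPy sketch; the frame shapes and the 255 peak value are assumptions, not details from the paper):

```python
import numpy as np

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio between a predicted and a ground-truth frame."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

def clip_psnr(pred_frames, target_frames):
    """Average PSNR over a forecasted clip of frames."""
    return float(np.mean([psnr(p, t) for p, t in zip(pred_frames, target_frames)]))
```

Averaging `clip_psnr` over all test videos yields a single number comparable to the 26.27 dB reported below.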
We measured an average PSNR of $26.27\mathrm{dB}$, which is on the same order of magnitude as the results obtained in simulation. Figure 17 provides a visual example of the output. + +# G QUALITATIVE EVALUATION: MORE VISUAL EXAMPLES + +More qualitative results produced by our model on different tasks from our datasets are given below. + +![](images/fde4f8e6af4fc68f5647e529549fafb5102b0ff92b6fc2928311960858283e6e.jpg) +Figure 18: Qualitative performance on the BlocktowerCF (BT-CF) benchmark. + +![](images/fde538c1b846c7a0a9b50d5d65249ac15b455382f12e368121f7ab6236902796.jpg) +Figure 19: Qualitative performance on the BallsCF (B-CF) benchmark. + +![](images/834da41c3320aadfe4459d4cefad08cadd7dba42cb666052ae45e6eb9c0c5b52.jpg) +Figure 20: Qualitative performance on the CollisionCF (C-CF) benchmark. \ No newline at end of file diff --git a/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/images.zip b/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..652754e9658b652bae53d0f65a393d81d05290e2 --- /dev/null +++ b/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af0e4cb41a17526235e8c6f5dcf8bdc4e48465598e3c7a6cbe4ea34e5147c385 +size 2266662 diff --git a/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/layout.json b/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..76263cee0eb8d4dcc505f70fe9b9df3140267f55 --- /dev/null +++ b/filteredcophyunsupervisedlearningofcounterfactualphysicsinpixelspace/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:14070d1e4f10e0bd700d3965552775fe2562a2f46451be71b73c56bcf4446434 +size 801309 diff --git
a/finetunedlanguagemodelsarezeroshotlearners/95809f8d-c9de-4b1f-bc2a-df06cd63751e_content_list.json b/finetunedlanguagemodelsarezeroshotlearners/95809f8d-c9de-4b1f-bc2a-df06cd63751e_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..ee8733b1f7b88aac4657bea58f8074ec6ddb8d1d --- /dev/null +++ b/finetunedlanguagemodelsarezeroshotlearners/95809f8d-c9de-4b1f-bc2a-df06cd63751e_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:33081f085fcdb9714045032170ad6a75ba6c30a4d33a115cabda733b21bba0e9 +size 254953 diff --git a/finetunedlanguagemodelsarezeroshotlearners/95809f8d-c9de-4b1f-bc2a-df06cd63751e_model.json b/finetunedlanguagemodelsarezeroshotlearners/95809f8d-c9de-4b1f-bc2a-df06cd63751e_model.json new file mode 100644 index 0000000000000000000000000000000000000000..8eee47acdad2e767d537222a7eb2e79086a38fd5 --- /dev/null +++ b/finetunedlanguagemodelsarezeroshotlearners/95809f8d-c9de-4b1f-bc2a-df06cd63751e_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:64150484bd98a2049e074b9d3bdc2192b68f4057b4cfe253f96274f54d99bebd +size 340619 diff --git a/finetunedlanguagemodelsarezeroshotlearners/95809f8d-c9de-4b1f-bc2a-df06cd63751e_origin.pdf b/finetunedlanguagemodelsarezeroshotlearners/95809f8d-c9de-4b1f-bc2a-df06cd63751e_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2a7bdd1cd15891884abf75293f1be0900e815d9e --- /dev/null +++ b/finetunedlanguagemodelsarezeroshotlearners/95809f8d-c9de-4b1f-bc2a-df06cd63751e_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c9520d528bed435c6f96bcef4518e222777d7a874bf8438b0dbcf039162a7e49 +size 1486536 diff --git a/finetunedlanguagemodelsarezeroshotlearners/full.md b/finetunedlanguagemodelsarezeroshotlearners/full.md new file mode 100644 index 0000000000000000000000000000000000000000..7872b0cc6f32722bdcfbabea15e600526e952b38 --- /dev/null +++ 
b/finetunedlanguagemodelsarezeroshotlearners/full.md @@ -0,0 +1,1125 @@ +# FINETUNED LANGUAGE MODELS ARE ZERO-SHOT LEARNERS + +Jason Wei*, Maarten Bosma*, Vincent Y. Zhao*, Kelvin Guu*, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le + +Google Research + +# ABSTRACT + +This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning—finetuning language models on a collection of datasets described via instructions—substantially improves zero-shot performance on unseen tasks. + +We take a 137B parameter pretrained language model and instruction tune it on over 60 NLP datasets verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 datasets that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that number of finetuning datasets, model scale, and natural language instructions are key to the success of instruction tuning. + +![](images/5233cd850a9c67e89f901484bb05989321eba5d58fbeda2103614d3dadc98da6.jpg) +Finetune on many tasks ("instruction-tuning") + +![](images/a1361ec181c617ede39e3f1cc8156e59bad8b5799864e0533081ec01b618e3cc.jpg) +GPT-3 175B zero shot + +![](images/c23e08d3993465bc348abddbfc7f942ecedb45a8a5a82641c244453e5962f2c8.jpg) + +![](images/83bbe323afb08ebd4a0f1b9ee05f2162b9ecf65f49ed6f4aa27b9d85d07ea50d.jpg) +GPT-3 175B few-shot +FLAN 137B zero-shot + +![](images/f5d990330088ce9d336657413b305894738d35e34eff78743dd12a2db9a53996.jpg) +Performance on unseen task types +Natural language inference +Figure 1: Top: overview of instruction tuning and FLAN. Instruction tuning finetunes a pretrained language model on a mixture of tasks phrased as instructions. 
At inference time, we evaluate on an unseen task type; for instance, we could evaluate the model on natural language inference (NLI) when no NLI tasks were seen during instruction tuning. Bottom: performance of zero-shot FLAN, compared with zero-shot and few-shot GPT-3, on three of the ten unseen task types we evaluate, where instruction tuning improved performance substantially. NLI datasets: ANLI R1-R3, CB, RTE. Reading comprehension datasets: BoolQ, MultiRC, OBQA. Closed-book QA datasets: ARC-easy, ARC-challenge, NQ, TriviaQA. + +![](images/4199b8b9e8251dfb4f0a3cf7557031f5ea1c9ae3c036bec89f8896baae107866.jpg) +Reading Comprehension + +![](images/bb5a064e799c573b4bf2b50d66c22af7b4313d23f784dcf4e4ef3ca10f9a8676.jpg) +Closed-Book QA + +# 1 INTRODUCTION + +Language models (LMs) at scale, such as GPT-3 (Brown et al., 2020), have been shown to perform few-shot learning remarkably well. They are less successful at zero-shot learning, however. For example, GPT-3's zero-shot performance is much worse than few-shot performance on tasks such as reading comprehension, question answering, and natural language inference. One potential reason is that, without few-shot exemplars, it is harder for models to perform well on prompts that are not similar to the format of the pretraining data. + +In this paper, we explore a simple method to improve the zero-shot performance of large language models, which would expand their reach to a broader audience. We leverage the intuition that NLP tasks can be described via natural language instructions, such as "Is the sentiment of this movie review positive or negative?" or "Translate 'how are you' into Chinese." We take a pretrained language model of 137B parameters and perform instruction tuning—finetuning the model on a mixture of more than 60 NLP datasets expressed via natural language instructions. We refer to this resulting model as FLAN, for Finetuned Language Net.
+ +To evaluate the zero-shot performance of FLAN on unseen tasks, we group NLP datasets into clusters based on their task types and hold out each cluster for evaluation while instruction tuning FLAN on all other clusters. For example, as shown in Figure 1, to evaluate FLAN's ability to perform natural language inference, we instruction tune the model on a range of other NLP tasks such as commonsense reasoning, translation, and sentiment analysis. As this setup ensures that FLAN has not seen any natural language inference tasks in instruction tuning, we then evaluate its ability to perform zero-shot natural language inference. + +Our evaluations show that FLAN substantially improves the zero-shot performance of the base 137B-parameter model. FLAN's zero-shot also outperforms 175B-parameter GPT-3's zero-shot on 20 of 25 datasets that we evaluate, and even outperforms GPT-3's few-shot by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. In ablation studies, we find that increasing the number of task clusters in instruction tuning improves performance on unseen tasks and that the benefits of instruction tuning emerge only with sufficient model scale. + +Instruction tuning is a simple method that, as depicted in Figure 2, combines appealing aspects of both the pretrain–finetune and prompting paradigms by using supervision via finetuning to improve language models' responses to inference-time text interactions. Our empirical results demonstrate promising abilities of language models to perform tasks described purely via instructions. Source code for loading the instruction tuning dataset used for FLAN is publicly available at https://github.com/google-research/flan.
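The leave-one-cluster-out protocol described above can be sketched as follows (the cluster names and dataset membership below are illustrative, not the paper's full task list):

```python
from itertools import chain

# Hypothetical task clusters; the paper groups 62 datasets into twelve clusters.
CLUSTERS = {
    "nli": ["anli_r1", "rte", "cb"],
    "sentiment": ["sst2", "imdb"],
    "translation": ["wmt14_enfr", "wmt16_ende"],
}

def leave_one_cluster_out(held_out):
    """Instruction tune on every cluster except `held_out`; evaluate on `held_out`."""
    train = list(chain.from_iterable(
        tasks for name, tasks in CLUSTERS.items() if name != held_out))
    return train, CLUSTERS[held_out]

# One model per held-out cluster, so evaluation tasks are always unseen.
train_tasks, eval_tasks = leave_one_cluster_out("nli")
```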
+ +![](images/95abfcdfc808b7356bdd78c8d6811f54bac3575770810a7f3fa0509981b550eb.jpg) + +![](images/7a16f118b1274f265bfc3494068e72bd746a373fe42f979eb286eb12e32e8353.jpg) + +![](images/1eec5379d04a1face2ca2e0c6980bf32fb93a51b5951c323ab4c7fef583ec731.jpg) +Figure 2: Comparing instruction tuning with pretrain–finetune and prompting. + +# 2 FLAN: INSTRUCTION TUNING IMPROVES ZERO-SHOT LEARNING + +The motivation of instruction tuning is to improve the ability of language models to respond to NLP instructions. The idea is that by using supervision to teach an LM to perform tasks described via instructions, the LM will learn to follow instructions and do so even for unseen tasks. To evaluate performance on unseen tasks, we group datasets into clusters by task type and hold out each task cluster for evaluation while instruction tuning on all remaining clusters. + +# 2.1 TASKS & TEMPLATES + +As creating an instruction tuning dataset with many tasks from scratch would be resource-intensive, we transform existing datasets from the research community into an instructional format. We aggregate 62 text datasets that are publicly available on TensorFlow Datasets, including both language understanding and language generation tasks, into a single mixture. Figure 3 shows these datasets—each dataset is categorized into one of twelve task clusters, for which datasets in a given cluster are of the same task type. Descriptions, sizes, and examples of each dataset are shown in Appendix G. + +![](images/19801c03d447a0ffaf96cf3d1bf0d8346d5c6fadbfb8557510d693c41e39b262.jpg) +Figure 3: Datasets and task clusters used in this paper (NLU tasks in blue; NLG tasks in teal). + +For each dataset, we manually compose ten unique templates that use natural language instructions to describe the task for that dataset.
While most of the ten templates describe the original task, to increase diversity, for each dataset we also include up to three templates that "turn the task around" (e.g., for sentiment classification we include templates asking to generate a movie review). We then instruction tune a pretrained language model on the mixture of all datasets, with examples in each dataset formatted via a randomly selected instruction template for that dataset. Figure 4 shows multiple instruction templates for a natural language inference dataset. + +![](images/d3c8f27b6b0ee4b78be13898b678a71b267157b9451a390fdfd92e7e510294ee.jpg) +Figure 4: Multiple instruction templates describing a natural language inference task. + +# 2.2 EVALUATION SPLITS + +We are interested in how FLAN performs on tasks not seen in instruction tuning, and so it is crucial to define what counts as an unseen task. Whereas some prior work defines unseen tasks by disallowing the same dataset to appear in training, we use a more conservative definition that leverages the task clusters from Figure 3. In this work, we consider dataset $\mathcal{D}$ unseen at evaluation time only if no datasets from any task clusters that $\mathcal{D}$ belongs to were seen during instruction tuning. For instance, if $\mathcal{D}$ is an entailment task, then no entailment datasets appeared in instruction tuning, and we instruction-tuned on all other clusters. Hence, to evaluate zero-shot FLAN on $c$ task clusters, we instruction tune $c$ models, where each model holds out a different task cluster for evaluation. + +# 2.3 CLASSIFICATION WITH OPTIONS + +The output space for a given task is either one of several classes (classification) or free text (generation). As FLAN is an instruction-tuned version of a decoder-only language model, it naturally responds in free text, and so no further modifications are needed for generation tasks.
+ +For classification tasks, prior work (Brown et al., 2020) used a rank classification approach where, for example, only two outputs ("yes" and "no") are considered and the higher-probability one is taken as the model's prediction. Though this procedure is logically sound, it is imperfect in that the probability mass for answers may have an undesired distribution among ways of saying each answer (e.g., a large number of alternative ways of saying "yes" may lower the probability mass assigned to "yes"). Therefore, we include an options suffix, in which we append the token OPTIONS to the end of a classification task along with a list of the output classes for that task. This makes the model aware of which choices are desired when responding to classification tasks. Example use of options is shown in the NLI and commonsense examples in Figure 1. + +# 2.4 TRAINING DETAILS + +Model architecture and pretraining. In our experiments, we use LaMDA-PT, a dense left-to-right, decoder-only transformer language model of 137B parameters (Thoppilan et al., 2022). This model is pretrained on a collection of web documents (including those with computer code), dialog data, and Wikipedia, tokenized into 2.49T BPE tokens with a 32k vocabulary using the SentencePiece library (Kudo & Richardson, 2018). Around $10\%$ of the pretraining data was non-English. Note that LaMDA-PT only has language model pretraining (cf. LaMDA, which was finetuned for dialog). + +Instruction tuning procedure. FLAN is the instruction-tuned version of LaMDA-PT. Our instruction tuning pipeline mixes all datasets and randomly samples from each dataset. To balance the different sizes of datasets, we limit the number of training examples per dataset to 30k and follow the examples-proportional mixing scheme (Raffel et al., 2020) with a mixing rate maximum of $3\mathrm{k}$.
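The per-dataset cap and examples-proportional mixing just described can be sketched as follows (dataset names and sizes are illustrative; this is a minimal reading of the scheme, not the paper's implementation):

```python
def mixing_weights(dataset_sizes, cap=30_000, rate_max=3_000):
    """Examples-proportional mixing: sample each dataset in proportion to
    min(capped size, rate_max), then normalize to a probability distribution."""
    capped = {name: min(n, cap) for name, n in dataset_sizes.items()}      # 30k example cap
    raw = {name: min(n, rate_max) for name, n in capped.items()}           # 3k mixing rate max
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}
```

With this scheme, very large datasets stop gaining sampling weight beyond the rate maximum, so small datasets are not drowned out.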
We finetune all models for 30k gradient steps with a batch size of 8,192 tokens using the Adafactor Optimizer (Shazeer & Stern, 2018) with a learning rate of 3e-5. The input and target sequence lengths used in finetuning are 1024 and 256, respectively. We use packing (Raffel et al., 2020) to combine multiple training examples into a single sequence, separating inputs from targets using a special EOS token. This instruction tuning takes around 60 hours on a TPUv3 with 128 cores. For all evaluations, we report results on the final checkpoint trained for 30k steps. + +# 3 RESULTS + +We evaluate FLAN on natural language inference, reading comprehension, closed-book QA, translation, commonsense reasoning, coreference resolution, and struct-to-text. As described in §2.2, we evaluate on unseen tasks by grouping datasets into task clusters and holding out each cluster for evaluation while instruction tuning on all remaining clusters (i.e., each evaluation task cluster uses a different checkpoint). For each dataset, we evaluate the mean of performance on all templates, which proxies the expected performance given a typical natural language instruction. As a dev set is sometimes available for manual prompt engineering (Brown et al., 2020), for each dataset we also obtain the test set performance using the template with the best dev set performance. + +For comparison, we report zero and few-shot results for LaMDA-PT using the same prompts as GPT-3 (as LaMDA-PT is not suitable for natural instructions without instruction tuning). This baseline provides the most direct ablation of how much instruction tuning helps. Instruction tuning significantly improves LaMDA-PT on most datasets. + +We also show the zero-shot performances of GPT-3 175B (Brown et al., 2020) and GLaM 64B/64E (Du et al., 2021), as reported in their respective papers. 
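The two reporting schemes, mean performance over all templates and test performance of the template with the best dev score, can be sketched as:

```python
def aggregate(scores):
    """scores maps template name -> (dev_score, test_score).
    Returns (mean test score over all templates,
             test score of the template with the best dev score)."""
    mean_test = sum(test for _, test in scores.values()) / len(scores)
    best = max(scores, key=lambda name: scores[name][0])  # select on dev only
    return mean_test, scores[best][1]
```

Selecting the template on the dev set keeps the test set untouched, mirroring manual prompt engineering with a dev set.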
With the best dev template, zero-shot FLAN outperforms zero-shot GPT-3 on 20 of 25 datasets and even surpasses GPT-3's few-shot performance on 10 datasets. With the best dev template, zero-shot FLAN also outperforms zero-shot GLaM on 13 of 19 available datasets and one-shot GLaM on 11 of 19 datasets. + +Overall, we observe that instruction tuning is very effective on tasks naturally verbalized as instructions (e.g., NLI, QA, translation, struct-to-text) and is less effective on tasks directly formulated as language modeling, where instructions would be largely redundant (e.g., commonsense reasoning and coreference resolution tasks that are formatted as finishing an incomplete sentence or paragraph). Results on natural language inference, reading comprehension, closed-book QA, and translation are summarized in Figure 5 and described below. + +![](images/8a88d1a653f14c67f0c6700a36357e638299de65fd1f452ed03a1227f3d6c47a.jpg) +Figure 5: Zero-shot performance of FLAN compared to LaMDA-PT 137B, GPT-3 175B, and GLaM 64B/64E on natural language inference, reading comprehension, closed-book QA, and translation. Performance of FLAN is the mean of up to 10 instructional templates per task. Supervised models were either T5, BERT, or translation models (specified in Table 2 and Table 1 in the Appendix). + +Natural language inference (NLI). On five NLI datasets, where a model must determine whether a hypothesis is true given some premise, FLAN outperforms all baselines by a large margin. As noted by Brown et al. (2020), perhaps one reason why GPT-3 struggles with NLI is that NLI examples are unlikely to have appeared naturally in an unsupervised training set and are thus awkwardly phrased as a continuation of a sentence. For FLAN, we phrase NLI as the more natural question “Does ⟨premise⟩ mean that ⟨hypothesis⟩?”, achieving much higher performance. + +Reading comprehension.
On reading comprehension, where models are asked to answer a question about a provided passage, FLAN outperforms baselines for MultiRC (Khashabi et al., 2018) and OBQA (Mihaylov et al., 2018). On BoolQ (Clark et al., 2019a), FLAN outperforms GPT-3 by a large margin, though LaMDA-PT already achieves high performance on BoolQ. + +Closed-book QA. For closed-book QA, which asks models to answer questions about the world without access to specific information containing the answer, FLAN outperforms GPT-3 on all four datasets. Compared to GLaM, FLAN has better performance on ARC-e and ARC-c (Clark et al., 2018), and slightly lower performance on NQ (Lee et al., 2019; Kwiatkowski et al., 2019) and TQA (Joshi et al., 2017). + +Translation. Similar to GPT-3, the training data for LaMDA-PT is around $90\%$ English and includes some text in other languages that was not specifically used to train the model to perform machine translation. We also evaluate FLAN's performance on machine translation for the three datasets evaluated in the GPT-3 paper: French-English from WMT'14 (Bojar et al., 2014), and German-English and Romanian-English from WMT'16 (Bojar et al., 2016). FLAN outperforms zero-shot GPT-3 on all six evaluations, though it underperforms few-shot GPT-3 in most cases. Similar to GPT-3, FLAN shows strong results for translating into English and compares favorably against supervised translation baselines. Translating from English into other languages, however, was relatively weaker, as might be expected given that FLAN uses an English sentencepiece tokenizer and that the majority of pretraining data is English. + +Additional tasks. Although we see strong results for the above task clusters, one limitation with instruction tuning is that it does not improve performance for many language modeling tasks (e.g., commonsense reasoning or coreference resolution tasks formulated as sentence completions).
For seven commonsense reasoning and coreference resolution tasks (see Table 2 in the Appendix), FLAN only outperforms LaMDA-PT on three of the seven tasks. This negative result indicates that when the downstream task is the same as the original language modeling pre-training objective (i.e., in cases where instructions are largely redundant), instruction tuning is not useful. Finally, we report results for sentiment analysis, paraphrase detection, and struct-to-text, as well as additional datasets for which GPT-3 results are not available, in Table 2 and Table 1 in the Appendix. Generally, zero-shot FLAN outperforms zero-shot LaMDA-PT and is comparable with or better than few-shot LaMDA-PT. + +# 4 ABLATION STUDIES & FURTHER ANALYSIS + +# 4.1 NUMBER OF INSTRUCTION TUNING CLUSTERS + +As the core question of our paper asks how instruction tuning improves a model's zero-shot performance on unseen tasks, in this first ablation we examine how performance is affected by the number of clusters and tasks used in instruction tuning. For this setup, we hold out NLI, closed-book QA, and commonsense reasoning as evaluation clusters, and use the seven remaining clusters for instruction tuning. We show results for one to seven instruction tuning clusters, where clusters are added in decreasing order of number of tasks per cluster. + +Figure 6 shows these results. As expected, we observe that average performance across the three held-out clusters improves as we add additional clusters and tasks to instruction tuning (with the exception of the sentiment analysis cluster), confirming the benefits of our proposed instruction tuning approach on zero-shot performance on novel tasks. It is further interesting to see that, for the seven clusters we test, the performance does not appear to saturate, implying that performance may further improve with even more clusters added to instruction tuning. 
Of note, this ablation does not allow us to draw conclusions about which instruction tuning cluster contributes the most to each evaluation cluster, although we see minimal added value from the sentiment analysis cluster. + +![](images/1392c54b6c275318c795e58e633c04056e13e49663d49f1dadd309ddf68a77b9.jpg) +Figure 6: Adding additional task clusters to instruction tuning improves zero-shot performance on held-out task clusters. The evaluation tasks are the following. Commonsense: CoPA, HellaSwag, PiQA, and StoryCloze. NLI: ANLI R1-R3, QNLI, RTE, SNLI, and WNLI. Closed-book QA: ARC easy, ARC challenge, Natural Questions, and TriviaQA. + +# 4.2 SCALING LAWS + +As Brown et al. (2020) shows that zero and few-shot capabilities of language models substantially improve for larger models, we next explore how the benefits of instruction tuning are affected by model scale. Using the same cluster split as in the previous ablation study, we evaluate the effect of instruction tuning on models of size 422M, 2B, 8B, 68B, and 137B parameters. + +Figure 7 shows these results. We see that for the two models on the order of 100B parameters, instruction tuning substantially improves performance on held-out tasks, as is expected given the prior results in our paper. The behavior on held-out tasks for the 8B and smaller models, however, is thought-provoking—instruction tuning actually hurts performance on held-out tasks. One potential explanation for this result could be that for small-scale models, learning the $\sim 40$ tasks used during instruction tuning fills the entire model capacity, causing these models to perform worse on new tasks. Under this potential explanation, for the larger scale models, instruction tuning fills up some model capacity but also teaches these models how to follow instructions, allowing them to generalize to new tasks with the remaining capacity.
+ +![](images/315cb730f47543a973257f209bb87df66dbb249d9a533839a25ad196a13d0b48.jpg) +Performance on held-out tasks +Figure 7: Whereas instruction tuning helps large models generalize to new tasks, for small models it actually hurts generalization to unseen tasks, potentially because all model capacity is used to learn the mixture of instruction tuning tasks. + +# 4.3 ROLE OF INSTRUCTIONS + +In a final ablation study, we explore the role of instructions during finetuning, as one possibility is that performance gains come entirely from multi-task finetuning and the model could perform just as well without instructions. We hence consider two finetuning setups without instructions. In a no template setup, only inputs and outputs were given to the model (e.g., for translation the input would be "The dog runs." and the output would be "Le chien court."). In a dataset name setup, each input is prepended with the name of the task and dataset (e.g., for translation to French, the input would be "[Translation: WMT'14 to French] The dog runs"). + +We compare these two ablations to FLAN's finetuning procedure, which used natural instructions (e.g., "Please translate this sentence to French: 'The dog runs'"). We perform evaluations for four held-out clusters from Figure 5. For the no template setup, we used the FLAN instructions during zero-shot inference (because if we used no template, the model would not know what task to perform). For models finetuned on dataset name only, we report zero-shot performance for FLAN instructions as well as using the dataset name. Figure 8 shows the results—both ablation configurations performed substantially worse than FLAN, indicating that training with instructions is crucial for zero-shot performance on unseen tasks.
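The three finetuning input formats compared in this ablation can be sketched as follows (the exact wording of each format follows the examples above; anything beyond them is illustrative):

```python
def format_example(src, setup, dataset_name="WMT'14 to French"):
    """Build the model input under each finetuning ablation setup."""
    if setup == "no_template":
        # Raw input only; the target ("Le chien court.") is the training label.
        return src
    if setup == "dataset_name":
        return f"[Translation: {dataset_name}] {src}"
    if setup == "natural_instructions":
        return f"Please translate this sentence to French: '{src}'"
    raise ValueError(f"unknown setup: {setup}")
```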
+ +![](images/d7a555892d8695c0555f0a8a48ac240eef6147b409092ba4004bd9ccfeee000d.jpg) + +![](images/c0d8af4aa4b5cad62c8b6a61714e9f0b34011b2ec93588183a828a72534941ad.jpg) +Figure 8: Ablation study result using models with instructions removed from finetuning (FT). + +# 4.4 INSTRUCTIONS WITH FEW-SHOT EXEMPLARS + +So far, we have focused on instruction tuning in the zero-shot setting. Here, we study how instruction tuning can be used when few-shot exemplars are available at inference time. The format for the few-shot setting builds on the zero-shot format. For some input $x$ and output $y$, let $\mathrm{instruct}(x)$ denote the zero-shot instructions. Then, given $k$ few-shot exemplars $(x_i, y_i)_{i=1}^k$ and a new input $x$, the instruction format for the few-shot setting is "$\mathrm{instruct}(x_1) \oplus y_1 \oplus \mathrm{instruct}(x_2) \oplus y_2 \oplus \ldots \oplus \mathrm{instruct}(x_k) \oplus y_k \oplus \mathrm{instruct}(x)$", where $\oplus$ denotes string concatenation with a delimiter token inserted in between. At both training and inference time, exemplars are randomly drawn from the training set, and the number of exemplars is capped at 16, subject to the total sequence length being less than 960 tokens. Our experiment uses the same task splits and evaluation procedure as §3, such that few-shot exemplars for an unseen task are only used at inference time. + +As shown in Figure 9, few-shot exemplars improve the performance on all task clusters, compared with zero-shot FLAN. Exemplars are especially effective for tasks with large/complex output spaces, such as struct-to-text, translation, and closed-book QA, potentially because exemplars help the model better understand the output format. In addition, for all task clusters, standard deviation among templates is lower for few-shot FLAN, indicating reduced sensitivity to prompt engineering.
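The few-shot concatenation format above can be sketched as follows (the delimiter string is an assumption standing in for the paper's delimiter token):

```python
DELIM = "\n\n"  # stand-in for the delimiter token between segments (assumption)

def fewshot_prompt(instruct, exemplars, x, max_k=16):
    """instruct(x1) + y1 + ... + instruct(xk) + yk + instruct(x),
    joined with the delimiter; exemplar count capped at max_k."""
    parts = []
    for xi, yi in exemplars[:max_k]:
        parts.extend([instruct(xi), yi])
    parts.append(instruct(x))  # the new input gets the instruction but no answer
    return DELIM.join(parts)
```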
+ +![](images/4f79a9490cde408fc8c493cd814e0a0787d51697b1acfb2299b9ae23fa1e067a.jpg) +Figure 9: Adding few-shot exemplars to FLAN is a complementary method for improving the performance of instruction-tuned models. The orange bars indicate standard deviation among templates, averaged at the dataset level for each task cluster. + +# 4.5 INSTRUCTION TUNING FACILITATES PROMPT TUNING + +As we've seen that instruction tuning improves the ability of a model to respond to instructions, it follows that, if FLAN is indeed more amenable to performing NLP tasks, then it should also achieve better performance when performing inference using soft prompts, represented by prepended continuous variables optimized via prompt tuning (Li & Liang, 2021; Lester et al., 2021). As further analysis, we train continuous prompts for each of the SuperGLUE (Wang et al., 2019a) tasks in accordance with the cluster splits from §2.2 such that when prompt-tuning on task $\mathcal{T}$ , no tasks in the same cluster as $\mathcal{T}$ were seen during instruction tuning. Our prompt tuning setup follows the procedure of Lester et al. (2021) except that we use a prompt length of 10 and a weight decay of 1e-4, and do not use dropout on the attention scores; we found in preliminary experiments that these changes improved the performance of LaMDA-PT. + +Figure 10 shows the results of these prompt tuning experiments, both when using a fully supervised training set and in a low-resource setting with only 32 training examples. We see that in all scenarios, prompt tuning works better with FLAN than LaMDA-PT. In many cases, especially for the low-resource setting, prompt tuning on FLAN even achieves more than $10\%$ improvement over prompt tuning on LaMDA-PT. This result exemplifies in another way how instruction tuning can result in a checkpoint that is more desirable for performing NLP tasks.
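The mechanics of prompt tuning, prepending trainable continuous vectors to frozen input embeddings, can be sketched as follows (a minimal NumPy illustration; the model dimension and initialization scale are assumptions, only the prompt length of 10 comes from the text):

```python
import numpy as np

PROMPT_LEN, D_MODEL = 10, 512  # prompt length 10 per the text; D_MODEL is assumed

rng = np.random.default_rng(0)
# The soft prompt is the only trained parameter; the LM weights stay frozen.
soft_prompt = rng.normal(scale=0.02, size=(PROMPT_LEN, D_MODEL))

def prepend_soft_prompt(token_embeddings):
    """Concatenate the continuous prompt ahead of the (frozen) input embeddings."""
    return np.concatenate([soft_prompt, token_embeddings], axis=0)
```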
+
+![](images/5f3f35b1efcac334ba1a48ebf6e4511d3df7ef8e47e33987200b6f8b969889db.jpg)
+Figure 10: Instruction-tuned models respond better to continuous inputs from prompt tuning. When prompt tuning on a given dataset, no tasks from the same cluster as that dataset were seen during instruction tuning. Performance shown is the average on the SuperGLUE dev set.
+
+# 5 RELATED WORK
+
+Our work relates to several broad research areas including zero-shot learning, prompting, multi-task learning, and language models for NLP applications (Radford et al., 2019; Raffel et al., 2020; Brown et al., 2020; Efrat & Levy, 2020; Aghajanyan et al., 2021; Li & Liang, 2021, inter alia). We describe prior work for these broad areas in an extended related work section (Appendix D), and here we describe two subareas narrower in scope that perhaps relate most closely to our work.
+
+The way we ask a model to respond to instructions is similar to QA-based task formulation (Kumar et al., 2016; McCann et al., 2018), which aims to unify NLP tasks by casting them as QA over a context. Though these methods are very similar to ours, they mostly focus on multi-task learning instead of zero-shot learning, and—as noted by Liu et al. (2021)—they are generally not motivated by using existing knowledge in pretrained LMs. Moreover, our work supersedes recent work such as Chai et al. (2020) and Zhong et al. (2021) in terms of both model scale and scope of tasks.
+
+The success of language models has led to nascent research on the ability of models to follow instructions. Most recently, Mishra et al. (2021) finetune a 140M-parameter BART on instructions with few-shot exemplars, and evaluate its few-shot abilities on unseen tasks—this is similar to our few-shot instruction tuning result from §4.4. This promising result (as well as one from Ye et al.
(2021), which does not emphasize instructions as much) suggests that finetuning on a collection of tasks improves few-shot performance on unseen tasks, even at a smaller model scale. Sanh et al. (2021) finetune T5 in a setup similar to ours, finding that zero-shot learning can be improved in a model of 11B parameters. At a model scale similar to ours, OpenAI's InstructGPT models are trained via both finetuning and reinforcement learning to produce outputs that are more preferred by human raters (Ouyang et al., 2022).
+
+# 6 DISCUSSION
+
+Our paper has explored a simple question in zero-shot prompting: does finetuning a model on a collection of tasks phrased as instructions improve its performance on unseen tasks? We operationalize this question via instruction tuning, a simple method that combines appealing aspects of both the pretrain–finetune and prompting paradigms. Our instruction-tuned model, FLAN, improves performance over an untuned model and surpasses zero-shot GPT-3 on the majority of tasks that we evaluate on. Ablation studies reveal that performance on unseen tasks improves with the number of instruction tuning task clusters, and, interestingly, that performance improvements from instruction tuning emerge only with sufficient model scale. Moreover, instruction tuning can be combined with other prompting methods such as few-shot prompting and prompt tuning.
+
+The diverse capabilities of language models at scale have drawn attention to the tradeoffs between specialist models (one model per task) and generalist models (one model for many tasks; Arivazhagan et al., 2019; Pratap et al., 2020), for which our study has potential implications. Although one might expect labeled data to have the most natural role in improving specialist models, instruction tuning demonstrates how labeled data can be used to help large language models perform many unseen tasks.
In other words, the positive effect of instruction tuning on cross-task generalization shows that task-specific training is complementary to general language modeling and motivates further research on generalist models.
+
+As for limitations of our study, there is a degree of subjectivity in assigning tasks to clusters (though we try to use accepted categorizations in the literature), and we only explore the use of relatively short instructions of typically a single sentence (cf. the detailed instructions given to crowd-workers). A limitation of our evaluation is that individual examples might have appeared in the models' pretraining data, which includes web documents, though in post-hoc analysis (Appendix C) we do not find any evidence that data overlap substantially impacted the results. Finally, the 137B-parameter scale of FLAN makes it costly to serve. Future work on instruction tuning could include gathering/generating even more task clusters for finetuning, cross-lingual experiments, using FLAN to generate data for training downstream classifiers, and using finetuning to improve model behavior with respect to bias and fairness (Solaiman & Dennison, 2021).
+
+# 7 CONCLUSIONS
+
+This paper has explored a simple method for improving the ability of language models at scale to perform zero-shot tasks based purely on instructions. Our instruction-tuned model, FLAN, compares favorably against GPT-3 and signals the potential ability for language models at scale to follow instructions. We hope that our paper will spur further research on instructions-based NLP, zero-shot learning, and using labeled data to improve large language models.
+
+# ETHICAL CONSIDERATIONS
+
+This work uses language models, for which the risks and potential harms are discussed in Bender & Koller (2020), Brown et al. (2020), Bender et al. (2021), Patterson et al. (2021), and others.
As our contribution in this paper is not a pretrained language model itself but rather an empirical study of how instruction tuning affects the zero-shot performance of a language model on unseen tasks, we additionally highlight two relevant ethical considerations. First, labeled datasets such as those we use for finetuning can contain undesirable biases, and these biases can be propagated into zero-shot applications of the model on downstream tasks. And second, instruction-tuned models can potentially require less data and expertise to use; such lower barriers to access could increase both the benefits and associated risks of such models.
+
+# ENVIRONMENTAL CONSIDERATIONS
+
+We use the same pretrained language models as Austin et al. (2021). The energy cost and carbon footprint for the pretrained models were 451 MWh and 26 tCO2e, respectively. The number of additional gradient steps used to instruction-tune FLAN is less than $2\%$ of the number of pretraining steps, so the estimated additional energy cost is comparatively small.
+
+# AUTHOR CONTRIBUTIONS
+
+Maarten Bosma conceived the original idea and implemented the first version of FLAN. Vincent Zhao prototyped the training and evaluation pipelines, as well as rank classification. Kelvin Guu proposed and implemented the idea of task clusters and evaluation using inter-cluster splits. Jason Wei, Maarten Bosma, Vincent Zhao, and Adams Wei Yu implemented the NLP tasks. Jason Wei, Vincent Zhao, and Adams Wei Yu conducted and managed most of the experiments. Jason Wei designed and ran the ablation studies. Jason Wei, Maarten Bosma, and Quoc V. Le wrote most of the paper. Jason Wei, Maarten Bosma, and Nan Du obtained the zero and few-shot baselines. Vincent Zhao and Kelvin Guu designed, implemented, and conducted the few-shot FLAN experiments. Maarten Bosma and Jason Wei ran the data contamination analysis. Brian Lester ran the prompt tuning experiments. Quoc V. Le and Andrew M.
Dai advised, provided high-level guidance, and helped edit the paper.
+
+# ACKNOWLEDGEMENTS
+
+We thank Ed Chi, Slav Petrov, Dan Garrette, Ruibo Liu, and Clara Meister for providing feedback on our manuscript. We thank Adam Roberts, Liam Fedus, Hyung Won Chung, and Noam Shazeer for helping debug some of our models. We thank Ellie Pavlick for feedback on the study design during the middle stages of the project. We thank Daniel De Freitas Adiwardana for helping initiate the project, advising on large language models, and giving us access to some computational resources. Finally, we thank the team involved in pretraining LaMDA-PT: Daniel De Freitas Adiwardana, Noam Shazeer, Yanping Huang, Dmitry Lepikhin, Dehao Chen, Yuanzhong Xu and Zhifeng Chen.
+
+# REFERENCES
+
+Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. Muppet: Massive multi-task representations with pre-finetuning. arXiv preprint arXiv:2101.11038, 2021. URL https://arxiv.org/abs/2101.11038.
+Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. Massively multilingual neural machine translation in the wild: Findings and challenges. arXiv preprint arXiv:1907.05019, 2019. URL https://arxiv.org/abs/1907.05019.
+Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021. URL https://arxiv.org/abs/2108.07732.
+Amittai Axelrod, Xiaodong He, and Jianfeng Gao. Domain adaptation via pseudo in-domain data selection. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pp. 355-362, 2011. URL https://aclanthology.org/D11-1033.
+Marta Bañón, Pinzhen Chen, Barry Haddow, Kenneth Heafield, Hieu Hoang, Miquel Esplà-Gomis, Mikel L.
Forcada, Amir Kamran, Faheem Kirefu, Philipp Koehn, Sergio Ortiz Rojas, Leopoldo Pla Sempere, Gema Ramírez-Sánchez, Elsa Sarrias, Marek Strelec, Brian Thompson, William Waites, Dion Wiggins, and Jaume Zaragoza. ParaCrawl: Web-scale acquisition of parallel corpora. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 4555-4567, 2020. URL https://aclanthology.org/2020.acl-main.417. +Emily M. Bender and Alexander Koller. Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5185-5198, 2020. URL https://aclanthology.org/2020.acl-main.463. +Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, pp. 610-623. Association for Computing Machinery, 2021. URL https://doi.org/10.1145/3442188.3445922. +Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. The Fifth PASCAL Recognizing Textual Entailment Challenge. In TAC, 2009. URL https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.232.1231&rep=rep1&type=pdf. +Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. PIQA: Reasoning about physical commonsense in natural language. In Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020. URL https://arxiv.org/abs/1911.11641. +Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, and Lucia Specia (eds.). Proceedings of the Ninth Workshop on Statistical Machine Translation, 2014. URL https://aclanthology.org/W14-3300. 
+
+Ondrej Bojar, Christian Buck, Rajen Chatterjee, Christian Federmann, Liane Guillou, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Aurélie Néveol, Mariana Neves, Pavel Pecina, Martin Popel, Philipp Koehn, Christof Monz, Matteo Negri, Matt Post, Lucia Specia, Karin Verspoor, Jörg Tiedemann, and Marco Turchi (eds.). Proceedings of the First Conference on Machine Translation: Volume 1, Research Papers, 2016. URL https://aclanthology.org/W16-2200.
+Rishi Bommasani, Drew A. Hudson, E. Adeli, R. Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, E. Brynjolfsson, S. Buch, D. Card, Rodrigo Castellon, Niladri S. Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, S. Ermon, J. Etchemendy, Kawin Ethayarajh, L. Fei-Fei, Chelsea Finn, Trevor Gale, Lauren E. Gillespie, Karan Goel, Noah D. Goodman, S. Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas F. Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, G. Keeling, Fereshte Khani, O. Khattab, Pang Wei Koh, M. Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, J. Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir P. Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, D. Narayanan, Ben Newman, Allen Nie, J. C. Niebles, H. Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, C. Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Robert Reich, Hongyu Ren, Frieda Rong, Yusuf H. Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, D. Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, K. Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E.
Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, M. Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021. URL https://arxiv.org/abs/2108.07258.
+Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 632-642, 2015. URL https://aclanthology.org/D15-1075.
+Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pp. 1877-1901, 2020. URL https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
+Duo Chai, Wei Wu, Qinghong Han, Fei Wu, and Jiwei Li. Description based text classification with reinforcement learning. In Proceedings of the International Conference on Machine Learning, pp. 1371-1382. PMLR, 2020. URL http://proceedings.mlr.press/v119/chai20a/chai20a.pdf.
+Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harri Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. URL https://arxiv.org/abs/2107.03374.
+Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer.
QuAC: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2174-2184, 2018. URL https://aclanthology.org/D18-1241.
+Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2924-2936, 2019a. URL https://aclanthology.org/N19-1300.
+Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, and Quoc V. Le. BAM! born-again multi-task networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 5931-5937, 2019b. URL https://aclanthology.org/P19-1595.
+Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018. URL https://arxiv.org/abs/1803.05457.
+Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537, 2011. URL https://www.jmlr.org/papers/volume12/collobert11a/collobert11a.pdf.
+Michele Corazza, Stefano Menini, Elena Cabrio, Sara Tonelli, and Serena Villata. Hybrid emoji-based masked language models for zero-shot abusive language detection. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 943-949, 2020. URL https://aclanthology.org/2020.findings-emnlp.84.
+Ido Dagan, Oren Glickman, and Bernardo Magnini. The PASCAL Recognising Textual Entailment challenge.
In Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and Recognizing Textual Entailment, MLCW'05, pp. 177-190, 2005. URL https://doi.org/10.1007/11736790_9.
+Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. In Proceedings of the Conference on Neural Information Processing Systems, 2015. URL https://papers.nips.cc/paper/2015/file/7137debd45ae4d0ab9aa953017286b20-Paper.pdf.
+Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. The CommitmentBank: Investigating projection in naturally occurring discourse. In Proceedings of Sinn und Bedeutung, pp. 107-124, 2019. URL https://ojs.ulb.uni-konstanz.de/sub/index.php/sub/article/view/601.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, 2019. URL https://aclanthology.org/N19-1423.
+William B. Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005. URL https://aclanthology.org/I05-5002.
+Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, et al. GLaM: Efficient scaling of language models with mixture-of-experts. arXiv preprint arXiv:2112.06905, 2021. URL https://arxiv.org/pdf/2112.06905.
+Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2368-2378, 2019. URL https://aclanthology.org/N19-1246. +Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. Edinburgh's phrase-based machine translation systems for WMT-14. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pp. 97-104, 2014. URL https://aclanthology.org/W14-3309. +Ondrej Dusek, David M. Howcroft, and Verena Rieser. Semantic noise matters for neural natural language generation. In Proceedings of the 12th International Conference on Natural Language Generation, pp. 421-426, 2019. URL https://aclanthology.org/W19-8652. +Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 489-500, 2018. URL https://aclanthology.org/D18-1045. +Avia Efrat and Omer Levy. The Turking Test: Can language models understand instructions? arXiv preprint arXiv:2010.11982, 2020. URL https://arxiv.org/abs/2010.11982. +Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1074-1084, 2019. URL https://aclanthology.org/P19-1102. + +Fast.AI. Yelp Sentiment Classification Dataset. https://course.fast.ai/datasets. +William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. arXiv preprint arXiv:2101.03961, 2021. URL https://arxiv.org/abs/2101.03961. +Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the International Conference on Machine Learning (ICML), pp. 
1126-1135, 2017. URL https://arxiv.org/abs/1703.03400. +Tianyu Gao, Adam Fisch, and Danqi Chen. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 3816-3830, 2021. URL https://aclanthology.org/2021.acl-long.295. +Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. The WebNLG challenge: Generating text from RDF data. In Proceedings of the 10th International Conference on Natural Language Generation, pp. 124-133, 2017. URL https://aclanthology.org/W17-3518. +Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, Wanyu Du, Esin Durmus, Ondrej Dušek, Chris Chinenye Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Mihir Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Andre Niyongabo Rubungo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, João Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, and Jiawei Zhou. The GEM benchmark: Natural language generation, its evaluation and metrics. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pp. 96–120, 2021. URL https://aclanthology.org/2021.gem-1.10. +Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 
The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pp. 1-9, 2007. URL https://aclanthology.org/W07-1401. +Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. SAMSum corpus: A human-annotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pp. 70–79, 2019. URL https://aclanthology.org/D19-5409. +Alec Go, Richa Bhayani, and Lei Huang. Twitter sentiment classification using distant supervision. CS224N project report, Stanford, 1(12):2009, 2009. URL https://www-cs.stanford.edu/people/alecmgo/papers/TwitterDistantSupervision09.pdf. +Dan Goldwasser and Dan Roth. Learning from natural instructions. Machine learning, 94(2):205-232, 2014. URL https://link.springer.com/article/10.1007/s10994-013-5407-y. +Max Grusky, Mor Naaman, and Yoav Artzi. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 708-719, 2018. URL https://aclanthology.org/N18-1065. +R Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. The Second PASCAL Recognising Textual Entailment Challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment, 2006. URL: http://www.cs.biu.ac.il/~szpekti/papers/RTE2-organizers.pdf. + +Luheng He, Mike Lewis, and Luke Zettlemoyer. Question-answer driven semantic role labeling: Using natural language to annotate natural language. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 643-653, 2015. URL https://aclanthology.org/D15-1076. +Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, and Luke Zettlemoyer. 
Surface form competition: Why the highest probability answer isn't always right. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021. URL https://aclanthology.org/2021.emnlp-main.564. +Eduard Hovy, Laurie Gerber, Ulf Hermjakob, Chin-Yew Lin, and Deepak Ravichandran. Toward semantics-based answer pinpointing. In Proceedings of the First International Conference on Human Language Technology Research, 2001. URL https://www.aclweb.org/anthology/H01-1069. +Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 328-339, 2018. URL https://aclanthology.org/P18-1031. +Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2391-2401, 2019. URL https://aclanthology.org/D19-1243. +Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351, 2017. URL https://aclanthology.org/Q17-1024. +Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1601-1611, 2017. URL https://aclanthology.org/P17-1147. +Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 
Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 252-262, 2018. URL https://aclanthology.org/N18-1023.
+Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. UNIFIEDQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 1896-1907, 2020. URL https://aclanthology.org/2020.findings-emnlp.171.
+Dimitrios Kotzias, Misha Denil, Nando de Freitas, and Padhraic Smyth. From group to individual labels using deep features. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015. URL https://dl.acm.org/doi/10.1145/2783258.2783380.
+Taku Kudo and John Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Eduardo Blanco and Wei Lu (eds.), Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018: System Demonstrations, Brussels, Belgium, October 31 - November 4, 2018, pp. 66-71. Association for Computational Linguistics, 2018. doi: 10.18653/v1/d18-2012. URL https://doi.org/10.18653/v1/d18-2012.
+Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. In Proceedings of the International Conference on Machine Learning, pp. 1378-1387. PMLR, 2016. URL https://arxiv.org/abs/1506.07285.
+
+Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M.
Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural Questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466, 2019. URL https://aclanthology.org/Q19-1026.
+Faisal Ladhak, Esin Durmus, Claire Cardie, and Kathleen McKeown. WikiLingua: A new benchmark dataset for cross-lingual abstractive summarization. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 4034-4048, 2020. URL https://aclanthology.org/2020.findings-emnlp.360.
+Christoph H. Lampert, Hannes Nickisch, and Stefan Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 951-958. IEEE, 2009. URL https://ieeexplore.ieee.org/document/5206594.
+Anne Lauscher, Vinit Ravishankar, Ivan Vulic, and Goran Glavaš. From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 4483-4499, 2020. URL https://aclanthology.org/2020.emnlp-main.363.
+Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 6086-6096, 2019. URL https://aclanthology.org/P19-1612.
+Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. GShard: Scaling giant models with conditional computation and automatic sharding. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=qrwe7XHTmYb.
+Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2021. URL https://arxiv.org/abs/2104.08691.
+Hector Levesque, Ernest Davis, and Leora Morgenstern. The Winograd Schema Challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning, 2012. URL https://dl.acm.org/doi/10.5555/3031843.3031909. +Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. Zero-shot relation extraction via reading comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pp. 333-342, 2017. URL https://aclanthology.org/K17-1034. +Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7871-7880, 2020. URL https://aclanthology.org/2020.acl-main.703. +Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 4582-4597, 2021. URL https://aclanthology.org/2021.acl-long.353. +Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. A unified MRC framework for named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5849-5859, 2020. URL https://aclanthology.org/2020.acl-main.519. + +Xin Li and Dan Roth. Learning question classifiers. In COLING 2002: The 19th International Conference on Computational Linguistics, 2002. URL https://www.aclweb.org/anthology/C02-1150. +Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. CommonGen: A constrained text generation challenge for generative commonsense reasoning. 
In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 1823-1840, 2020. URL https://aclanthology.org/2020.findings-emnlp.165. +Han Liu, Xiaotong Zhang, Lu Fan, Xuandi Fu, Qimai Li, Xiao-Ming Wu, and Albert Y.S. Lam. Reconstructing capsule networks for zero-shot intent classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 4799-4809, 2019a. URL https://aclanthology.org/D19-1486. +Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586, 2021. URL https://arxiv.org/abs/2107.13586. +Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4487-4496, 2019b. URL https://aclanthology.org/P19-1441. +Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742, 2020. URL https://aclanthology.org/2020.tacl-1.47. +Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task sequence to sequence learning. Proceedings of ICLR, 2016. URL https://arxiv.org/abs/1511.06114. +Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142-150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015.
+Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730, 2018. URL https://arxiv.org/abs/1806.08730. +John McCarthy. Programs with common sense. RLE and MIT computation center, 1960. URL http://jmc.stanford.edu/articles/mcc59/mcc59.pdf. +Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? A new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2381-2391, 2018. URL https://aclanthology.org/D18-1260. +Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. MetaICL: Learning to learn in context. arXiv preprint arXiv:2110.15943, 2021. URL https://arxiv.org/abs/2110.15943. +Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. Natural Instructions: Benchmarking generalization to new tasks from natural language instructions. arXiv preprint arXiv:2104.08773, 2021. URL https://arxiv.org/abs/2104.08773. +Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 839-849, 2016. URL https://aclanthology.org/N16-1098. + +Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, and Nazneen Fatema Rajani. DART: Open-domain structured data record to text generation.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 432-447, 2021. URL https://aclanthology.org/2021.naacl-main.37. +Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. Annotated Gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX), pp. 95-100, 2012. URL https://aclanthology.org/W12-3018. +Shashi Narayan, Shay B. Cohen, and Mirella Lapata. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1797-1807, 2018. URL https://aclanthology.org/D18-1206. +Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. Adversarial NLI: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 4885-4901, 2020. URL https://aclanthology.org/2020.acl-main.441. +Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. Preprint, 2022. URL https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf. +Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 2227-2237, 2018. URL https://aclanthology.org/N18-1202.
+Ngoc-Quan Pham, Jan Niehues, Thanh-Le Ha, and Alexander Waibel. Improving zero-shot translation with language-independent constraints. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pp. 13-23, 2019. URL https://aclanthology.org/W19-5202. +Mohammad Taher Pilehvar and Jose Camacho-Collados. WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 1267-1273, 2019. URL https://aclanthology.org/N19-1128. +Vineel Pratap, Anuroop Sriram, Paden Tomasello, Awni Hannun, Vitaliy Liptchinsky, Gabriel Synnaeve, and Ronan Collobert. Massively multilingual ASR: 50 languages, 1 model, 1 billion parameters. arXiv preprint arXiv:2007.03001, 2020. URL https://arxiv.org/abs/2007.03001. +Guanghui Qin and Jason Eisner. Learning how to ask: Querying LMs with mixtures of soft prompts. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pp. 5203-5212, 2021. URL http://cs.jhu.edu/~jason/papers/#qin-eisner-2021. +Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. https://blog.openai.com/language-unsupervised, 2018. + +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. URL https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf. +Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer.
Journal of Machine Learning Research, 21(140):1-67, 2020. URL http://jmlr.org/papers/v21/20-074.html. +Altaf Rahman and Vincent Ng. Resolving complex cases of definite pronouns: The Winograd schema challenge. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 777-789, 2012. URL https://aclanthology.org/D12-1071. +Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383-2392, 2016. URL https://aclanthology.org/D16-1264. +Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 784-789, 2018. URL https://aclanthology.org/P18-2124. +Siva Reddy, Danqi Chen, and Christopher D. Manning. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266, 2019. URL https://aclanthology.org/Q19-1016. +Emily Reif, Daphne Ippolito, Ann Yuan, Andy Coenen, Chris Callison-Burch, and Jason Wei. A recipe for arbitrary text style transfer with large language models. arXiv preprint arXiv:2109.03910, 2021. URL https://arxiv.org/abs/2109.03910. +Melissa Roemmele, Cosmin Bejan, and Andrew Gordon. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In AAAI Spring Symposium Series, 2011. URL https://www.aaai.org/ocs/index.php/SSS/SSS11/paper/view/2418. +Bernardino Romera-Paredes and Philip Torr. An embarrassingly simple approach to zero-shot learning. In Proceedings of the International Conference on Machine Learning, pp. 2152-2161, 2015. URL https://proceedings.mlr.press/v37/romera-paredes15.pdf. +Sebastian Ruder. An overview of multi-task learning in deep neural networks. 
arXiv preprint arXiv:1706.05098, 2017. URL https://arxiv.org/abs/1706.05098. +Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. WinoGrande: An adversarial winograd schema challenge at scale. In Proceedings of the AAAI Conference on Artificial Intelligence, pp. 8732-8740, 2020. URL https://arxiv.org/abs/1907.10641. +Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot task generalization. Proceedings of the International Conference on Learning Representations, 2021. URL https://arxiv.org/abs/2110.08207. +David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical reasoning abilities of neural models. Proceedings of the International Conference on Learning Representations, 2019. URL https://arxiv.org/pdf/1904.01557. +Timo Schick and Hinrich Schütze. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pp. 255–269, 2021. URL https://aclanthology.org/2021.eacl-main.20. + +Abigail See, Peter J. Liu, and Christopher D. Manning. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1073-1083, 2017. URL https://aclanthology.org/P17-1099. +Rico Sennrich, Barry Haddow, and Alexandra Birch. Edinburgh neural machine translation systems for WMT 16. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pp. 371-376, 2016. URL https://aclanthology.org/W16-2323. +Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pp. 4596-4604. PMLR, 2018. 
URL https://arxiv.org/abs/1804.04235. +Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631-1642, 2013. URL https://aclanthology.org/D13-1170. +Irene Solaiman and Christy Dennison. Process for adapting language models to society (PALMS) with values-targeted datasets. arXiv preprint arXiv:2106.10328, 2021. URL https://arxiv.org/abs/2106.10328. +Shashank Srivastava, Igor Labutov, and Tom Mitchell. Zero-shot learning of classifiers from natural language quantification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 306-316, 2018. URL https://aclanthology.org/P18-1029. +Derek Tam, Rakesh R. Menon, Mohit Bansal, Shashank Srivastava, and Colin Raffel. Improving and simplifying pattern exploiting training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021. URL https://arxiv.org/pdf/2103.11955. +Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239, 2022. URL https://arxiv.org/pdf/2201.08239. +Joaquin Vanschoren. Meta-learning: A survey. arXiv preprint arXiv:1810.03548, 2018. URL https://arxiv.org/abs/1810.03548. +Marc Velay and Fabrice Daniel. Seq2seq and multi-task learning for joint intent and content extraction for domain specific interpreters. arXiv preprint arXiv:1808.00423, 2018. URL https://arxiv.org/abs/1808.00423. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding.
In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 353-355, 2018. URL https://aclanthology.org/W18-5446. +Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. Conference on Neural Information Processing Systems (NeurIPS), 2019a. URL https://arxiv.org/abs/1905.00537. +Lu Wang and Wang Ling. Neural network-based abstract generation for opinions and arguments. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 47-57, 2016. URL https://www.aclweb.org/anthology/N16-1007. +Yiren Wang, Yingce Xia, Tianyu He, Fei Tian, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. Multiagent dual learning. In Proceedings of the International Conference on Learning Representations (ICLR) 2019, 2019b. URL https://openreview.net/forum?id=HyGhN2A5tm. + +Zirui Wang, Adams Wei Yu, Orhan Firat, and Yuan Cao. Towards zero-label language learning. arXiv preprint arXiv:2109.09193, 2021. URL https://arxiv.org/abs/2109.09193. +Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641, 2019. doi: 10.1162/tacl_a_00290. URL https://aclanthology.org/Q19-1040. +Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022. URL https://arxiv.org/pdf/2201.11903. +Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp.
1112-1122, 2018. URL http://aclweb.org/anthology/N18-1101. +Joseph Worsham and J. Kalita. Multi-task learning for natural language processing in the 2020s: where are we going? arXiv preprint arXiv:2007.16008, 2020. URL https://arxiv.org/abs/2007.16008. +Jeff Wu, Long Ouyang, Daniel M Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, and Paul Christiano. Recursively summarizing books with human feedback. arXiv preprint arXiv:2109.10862, 2021. URL https://arxiv.org/abs/2109.10862. +Wei Wu, Fei Wang, Arianna Yuan, Fei Wu, and Jiwei Li. CorefQA: Coreference resolution as query-based span prediction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 6953-6963, 2020. URL https://aclanthology.org/2020.acl-main.622. +Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren. CrossFit: A few-shot learning challenge for cross-task generalization in NLP. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021. URL https://arxiv.org/abs/2104.08835. +Wenpeng Yin, Jamaal Hay, and Dan Roth. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 3914-3923, 2019. URL https://aclanthology.org/D19-1404. +Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4791-4800, 2019. URL https://aclanthology.org/P19-1472. +Rui Zhang and Joel Tetreault. This email could save your life: Introducing the task of email subject line generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019. URL https://aclanthology.org/P19-1043.
+Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. ReCoRD: Bridging the gap between human and machine commonsense reading comprehension. CoRR, abs/1810.12885, 2018. URL http://arxiv.org/abs/1810.12885. +Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015. URL https://proceedings.neurips.cc/paper/2015/file/250cf8b51c773f3f8dc8b4be867a9a02-Paper.pdf. +Yuan Zhang, Jason Baldridge, and Luheng He. PAWS: Paraphrase adversaries from word scrambling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 1298-1308, 2019. URL https://aclanthology.org/N19-1131. +Ruiqi Zhong, Kristy Lee, Zheng Zhang, and Dan Klein. Meta-tuning language models to answer prompts better. arXiv preprint arXiv:2104.04670, 2021. URL https://arxiv.org/abs/2104.04670. + +# A ADDITIONAL RESULTS + +This section shows the full results for all datasets we evaluate. Results for translation and struct-to-text are shown in Table 1, and the results for eight NLU task clusters are shown in Table 2. + +We show FLAN's performance averaged over up to ten instruction templates, as well as with the template that performed best on the dev set. For LaMDA-PT, we use the templates from Brown et al. (2020), which were optimized for GPT-3, without performing any prompt engineering to optimize them on our model. For simplicity, we use greedy search for all generative tasks (compared with the beam search used in Brown et al. (2020)). Unlike GPT-3, which chooses the number of few-shot exemplars $k$ via best dev set performance, for few-shot LaMDA-PT we choose the highest $k$ that fits in the context length of 1024 tokens, from $k \in \{1, 3, 5, 10\}$. + +For DROP (Dua et al., 2019) and SQuADv2 (Rajpurkar et al., 2018), based on email correspondence with Brown et al.
(2020), their definition of zero-shot differs from ours in that they actually use exemplars, but only from the same passage as the inference question (each passage has more than one question). Hence, GPT-3 zero-shot results are not directly comparable with ours for DROP and SQuADv2. We mark these results using the $\dagger$ symbol. Moreover, it is unclear how to parse the end of an answer for these two datasets, and so we use curly bracket delimiters { and }, where we expect } to indicate the end of the answer. + +For struct-to-text, reported T5/mT5 results are from the GEM benchmark paper (Gehrmann et al., 2021), though we do not report their results for DART (through correspondence with the authors, we confirmed that their results for DART were incorrect). Though we use a summarization task cluster during instruction tuning, we leave evaluation of summarization for future work, as the mean input length of most summarization datasets exceeds FLAN's input length of 1024 tokens. + +
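The two evaluation details above, picking the largest $k$ that fits the 1024-token context and treating } as the end-of-answer delimiter, can be sketched as follows. This is a minimal illustration, not our actual evaluation code: `choose_k`, `extract_answer`, and the `token_len` callback are hypothetical names, and the whitespace token count used in the example stands in for the model's real tokenizer.

```python
def choose_k(exemplars, query, token_len, context_limit=1024, candidates=(1, 3, 5, 10)):
    """Return the largest k in `candidates` whose k-shot prompt fits the context window (0 if none fit)."""
    best = 0
    for k in candidates:
        total = token_len(query) + sum(token_len(e) for e in exemplars[:k])
        if total <= context_limit:
            best = k
    return best


def extract_answer(generation):
    """Answers are wrapped in curly brackets; '}' marks the end of the answer."""
    start = generation.find("{")
    end = generation.find("}", start + 1)
    if start == -1 or end == -1:
        return generation.strip()  # no delimiters found: fall back to the raw generation
    return generation[start + 1:end].strip()
```

For instance, with 200-token exemplars and a 100-token question, `choose_k` selects $k=3$ (700 tokens fit, but $k=5$ would need 1,100).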
MetricLaMDA-PTFLAN 137B
Supervised Modelzero-shotfew-shot [k]zero-shotfew-shot [k]zero-shotfew-shot#t
average templatebest dev templateaverage templatebest dev template
TRANSLATION
WMT '14 En→FrBLEU35.0d11.231.5 [5]25.232.6 [64]32.9±1.133.933.9±0.233.85
WMT '14 Fr→EnBLEU45.6c7.234.7 [5]21.239.2 [64]35.5±1.335.938.0±0.137.93
WMT '16 En→DeBLEU38.6f7.726.7 [5]24.629.7 [64]25.4±1.827.026.8±0.426.15
WMT '16 De→EnBLEU41.2e20.836.8 [5]27.240.6 [64]38.9±0.338.940.6±0.140.73
WMT '16 En→RoBLEU39.9g3.522.9 [5]14.121.0 [64]16.7±1.618.920.5±0.120.55
WMT '16 Ro→EnBLEU38.5g9.737.5 [5]19.939.5 [64]36.8±0.537.338.2±0.138.13
STRUCT TO TEXT
CommonGenRouge-164.0a3.956.7 [3]--54.6±2.356.356.6±0.356.46
Rouge-229.4a1.529.6 [3]--28.8±2.427.630.9±0.729.96
Rouge-L54.5a3.248.5 [3]--48.4±1.948.750.7±0.251.06
DARTRouge-1-11.356.0 [3]--45.5±4.248.957.9±1.659.27
Rouge-2-1.529.6 [3]--25.0±3.730.035.8±1.036.27
Rouge-L-3.248.5 [3]--38.4±3.843.448.5±0.948.27
E2ENLGRouge-172.6a6.256.7 [3]--44.8±3.951.459.1±1.359.79
Rouge-247.5a2.531.4 [3]--24.2±3.630.133.2±1.133.69
Rouge-L56.4a4.941.1 [3]--37.0±3.542.444.9±0.845.19
WebNLGRouge-183.5a13.968.3 [3]--50.6±4.757.768.5±2.271.28
Rouge-263.6a6.946.0 [3]--29.8±4.235.448.0±1.549.88
Rouge-L71.0a11.856.5 [3]--43.4±4.549.758.8±1.160.28
+ +Table 1: Results for translation and struct-to-text tasks. $[k]$ indicates the number of few-shot exemplars. $\#t$ indicates the number of templates that FLAN is evaluated on. $^a$ T5-11B, $^c$ Edunov et al. (2018), $^d$ Durrani et al. (2014), $^e$ Wang et al. (2019b), $^f$ Sennrich et al. (2016), $^g$ Liu et al. (2020). + +
Random Supervised ModelGLaMLaMDA-PTGPT-3 175BFLAN 137B
zero-shotone-shotzero-shotfew-shotzero-shotfew-shotzero-shotaverage templatebest dev templateaverage templatebest dev template
NLI
ANLI R133.357.4b40.942.439.639.0 [5]34.636.8 [50]47.7±1.446.444.2±2.347.9[6] 8
ANLI R233.348.3b38.240.039.937.5 [5]35.434.0 [50]43.9±1.344.041.6±1.441.1[6] 8
ANLI R333.343.5b40.940.839.340.7 [5]34.540.2 [50]47.0±1.348.542.8±2.246.8[6] 8
CB33.393.6a33.973.242.934.4 [5]46.482.1 [32]64.1±14.783.982.6±4.482.1[7] 10
MNLI-m33.392.2a--35.743.7 [5]--51.1±6.261.260.8±3.763.5[10] 10
MNLI-mm33.391.9a--37.043.8 [5]--51.0±6.562.461.0±3.563.5[10] 10
QNLI50.096.9a--50.655.7 [5]--59.6±4.966.462.0±1.763.3[12] 9
RTE50.092.5a68.871.573.370.8 [5]63.572.9 [32]78.3±7.984.179.9±6.984.5[8] 10
SNLI33.391.3b--33.354.7 [5]--43.0±7.453.462.3±2.465.6[15] 9
WNLI50.094.5a--56.364.8 [5]--61.0±10.674.655.4±11.070.4[14] 10
READING COMP.
BoolQ50.091.2a83.082.881.080.0 [1]60.577.5 [32]80.2±3.182.983.6±0.884.6[4] 9
DROP-80.5b54.955.23.810.3 [1]23.6†36.5 [20]21.9±0.922.722.3±1.123.9[2] 7
MultiRC-88.1a45.162.060.059.6 [5]72.974.8 [32]74.5±3.777.569.2±3.272.1[1] 8
OBQA25.085.4a53.055.241.850.6 [10]57.665.4 [100]77.4±1.378.477.2±1.378.2[16] 7
SQuADv1-96.2a--22.750.2 [3]--79.5±1.680.182.1±0.582.7[4] 8
SQuADv2-83.4b68.370.011.134.9 [3]59.5†69.8 [16]40.9±1.844.240.8±0.943.1[3] 10
CLOSED-BOOK QA
ARC-c25.081.1a48.250.342.049.4 [10]51.451.5 [50]61.7±1.463.163.7±0.663.8[13] 7
ARC-e25.092.6a71.976.676.480.9 [10]68.870.1 [50]79.5±0.879.680.5±0.580.7[14] 7
NQ-36.6a21.523.93.222.1 [5]14.629.9 [64]18.6±2.720.727.2±0.527.6[16] 10
TQA (wiki)-60.5a68.871.521.963.3 [10]64.371.2 [64]66.5±2.668.166.5±1.067.3[16] 10
TQA (tfds-dev)-51.0a--18.455.1 [10]--55.0±2.356.757.2±0.657.8[16] 10
COMMONSENSE
COPA50.094.8a90.092.090.089.0 [10]91.092.0 [32]90.6±2.091.088.5±3.887.0[16] 8
HellaSwag25.047.3b77.176.857.058.8 [10]78.979.3 [20]56.4±0.556.759.4±0.259.2[3] 8
PIQA50.066.8b80.481.480.3*80.2* [10]81.082.3 [50]80.9*±0.880.5*82.1*±0.381.7*[10] 8
StoryCloze50.089.2b82.584.079.583.7 [10]83.287.7 [70]92.2±1.393.493.3±0.994.7[10] 8
SENTIMENT
IMDB50.095.5b--76.983.3 [1]--94.1±0.494.394.8±0.395.0[2] 7
Sent14050.087.0b--41.463.3 [5]--69.9±2.573.568.7±1.269.3[16] 6
SST-250.097.5a--51.092.3 [5]71.695.6 [8]92.6±1.794.694.4±0.894.6[16] 8
Yelp50.098.1b--84.789.6 [3]--97.8±0.298.197.9±0.198.0[4] 7
PARAPHRASE
MRPC50.090.4a--53.764.0 [5]--69.1±1.369.167.5±1.767.2[10] 10
QQP50.090.6a--34.958.9 [3]--72.1±6.875.973.5±2.975.9[16] 7
PAWS Wiki50.091.9a--45.553.5 [5]--61.5±6.569.466.2±0.970.2[10] 10
COREFERENCE
DPR50.084.8b--54.657.3 [5]--60.3±3.566.862.4±1.663.3[16] 10
Winogrande50.065.8b73.473.068.368.4 [10]70.277.7 [50]67.3±2.571.272.3±0.972.8[16] 10
WSC27350.070.0b86.883.981.061.5 [5]88.388.5 [32]80.8±3.7-- ± --[-] 10
READ.COMP.W/COMMONSENSE
CosmosQA25.067.1b--34.133.8 [5]--58.4±1.360.656.7±1.356.0[5] 8
ReCoRD-93.4a90.390.387.8*87.6*[1]90.289.0 [32]67.8*±3.072.5*77.0*±2.079.0*[1] 10
+ +Table 2: Results for eight NLU task clusters. All values shown are for accuracy (or exact match) except DROP, MultiRC, and SQuAD v1 and v2, which are F1. $[k]$ indicates the number of few-shot exemplars. $\#t$ indicates the number of templates that FLAN is evaluated on. $^a$ T5-11B, $^b$ BERT-large. \*see data contamination (Appendix C). WSC273 (Levesque et al., 2012) does not have training or validation sets, and so we do not compute few-shot results for FLAN. For TriviaQA (TQA), we report exact match (EM) on both the Wikipedia subset of the dev set, to compare with GPT-3, and the full TFDS dev set. + +# B FURTHER ABLATION STUDIES AND ANALYSIS + +# B.1 DATASETS PER TASK CLUSTER & TEMPLATES PER DATASET + +Our primary hypothesis is that instruction tuning on a diverse set of tasks improves performance on unseen tasks. §4.1 showed that adding more task clusters improves performance; here, we further explore whether adding additional datasets improves performance when the number of task clusters is held constant. We use the same split as in §4.1, where the NLI, commonsense reasoning, and closed-book QA clusters are held out, and seven other task clusters remain for instruction tuning. For these seven task clusters, we instruction tune models using just one dataset per task cluster and using four datasets per task cluster (for task clusters that did not have four datasets, we just used all available datasets). In addition, we simultaneously explore the role of the number of instruction templates per dataset; as mentioned in §2.1, for each dataset we manually composed ten instruction templates for instruction tuning. Here, we instruction tune models using 1, 4, and 10 templates per dataset. + +Figure 11 shows these results. Using more datasets per cluster improved performance by almost $10\%$ on average across the three held-out clusters.
Using more templates per dataset, however, had a comparatively negligible effect on performance when there was one dataset per cluster, and this effect disappeared when there were four datasets per cluster. The small effect of templates is striking given our original motivation that composing ten templates per task would mitigate overfitting to any particular template. This result, however, underscores the unpredictability of finetuning large language models; one hypothesis is that models at such scale do not easily overfit to a single finetuning task. + +![](images/36db96a366d099ce143d6e8d1b0fd2507b965c49fe007dc77aa6e0d49859c707.jpg) +Figure 11: Effect of datasets per task cluster and templates per dataset on performance on three held-out clusters: NLI, commonsense reasoning, and closed-book QA. Adding more datasets per task cluster substantially improves performance. Using more templates per dataset, however, only had a very small effect on performance, which disappeared when there were sufficient datasets per task cluster. + +# B.2 ROLE OF INSTRUCTIONS DURING FINETUNING + +The per-cluster results for the ablation study from §4.3 are shown in Table 3. + +# B.3 FURTHER ANALYSIS: INSTRUCTION TUNING FACILITATES PROMPT TUNING + +The per-dataset results for the analysis in §4.5 are given in Table 4. As the above tasks are all classification, further work in this direction might include tasks such as summarization or question answering, or try to finetune the model using the supervised datasets. + +# C DATA CONTAMINATION ANALYSIS + +One reasonable concern is that, because the pretraining corpus of FLAN has more than 2 trillion tokens, examples from a given evaluation dataset may have already been seen verbatim by the model during pretraining, hence inflating the performance of our purported zero-shot model. To this end, like GPT-3 (Brown et al., 2020), we perform post-hoc data contamination analysis to
Zero-shot performance on unseen task clusters:

| Finetuning prompt | Inference prompt | NLI | Read. Comp. | Closed-Book QA | Translation | Four-Task Average |
| --- | --- | --- | --- | --- | --- | --- |
| Natural instructions (= FLAN) | Natural instructions | 56.2 | 77.4 | 56.6 | 30.7 | 55.2 |
| No template | Natural instructions | 50.5 | 58.2 | 25.5 | 15.0 | 37.3 |
| Task/dataset name | Natural instructions | 52.8 | 63.0 | 44.8 | 25.9 | 46.6 |
| Task/dataset name | Task/dataset name | 60.2 | 64.9 | 40.8 | 21.9 | 47.0 |
+ +Table 3: Ablation study results using models where instructions are removed from the finetuning process. In "no template," only inputs and outputs are given, which does not distinguish among tasks during multi-task finetuning. In "task/dataset name," inputs during multi-task finetuning are prepended with the name of the task and dataset (e.g., "[Translation: WMT'14 to French] The dog runs"). NLI datasets: ANLI R1-R3, CB, and RTE; reading comprehension datasets: BoolQ, MultiRC, and OpenbookQA; closed-book QA datasets: ARC-c, ARC-e, NQ, and TQA; translation datasets: WMT'14 Fr↔En, WMT'16 De↔En, and WMT'16 Ro↔En. Notably, training with task/dataset name achieved a high NLI score largely because it achieved a score of 83.9 on the CB dataset, for which the validation set has only 56 examples (FLAN also gets 83.9 with the best dev template, but the average template was only 64.1). + +
| Model | Prompt tuning train. examples | BoolQ acc. | CB acc. | COPA acc. | MultiRC F1 | ReCoRD acc. | RTE acc. | WiC acc. | WSC acc. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LaMDA-PT | 32 | 55.5 | 55.4 | 87.0 | 65.4 | 78.0 | 52.4 | 51.6 | 65.4 |
| FLAN | 32 | 77.5 | 87.5 | 91.0 | 76.8 | 80.8 | 83.0 | 57.8 | 70.2 |
| LaMDA-PT | full dataset | 82.8 | 87.5 | 90.0 | 78.6 | 84.8 | 82.0 | 54.9 | 72.7 |
| FLAN | full dataset | 86.3 | 98.2 | 94.0 | 83.4 | 85.1 | 91.7 | 74.0 | 86.5 |
+ +Table 4: FLAN (instruction tuning) responds better to continuous inputs attained via prompt tuning than LaMDA-PT (no instruction tuning). When prompt tuning on a given dataset, no tasks from the same cluster as that dataset were seen during instruction tuning. + +investigate whether the performance of the model is in fact inflated by evaluating on examples that occurred in the pretraining dataset. + +Our data contamination procedure follows the setup of Brown et al. (2020), which, for each benchmark, produces a "clean" version that removes all potentially leaked examples, defined as examples for which any $n$-gram ($n$ varies per dataset but is roughly 13) overlapped with anything in the pretraining corpus. We use the same $n$ per dataset as Brown et al. (2020) and also split on spaces. We then evaluate our model on this clean subset, comparing against model performance on the original dataset (clean + dirty). Lower performance on the clean subset would suggest that data contamination leads to inflated results. + +Figure 12 summarizes these results, with the exact numbers given in Table 5. We see several trends very similar to those in the GPT-3 paper: (1) many datasets had a substantial number of examples that overlapped with the pretraining data, (2) across all datasets, we do not see a consistent pattern in which evaluating on clean data does worse than evaluating on the total dataset, and (3) as datasets had fewer clean examples, there was higher variance in the percent change in performance (likely due to the smaller number of clean examples). + +Like GPT-3, we also found that DROP and SQuADv2 had almost total overlap with the pretraining data. We follow their procedure of manually inspecting the data, and find that most overlapping $n$-grams were only in the contexts of examples (99.6% for DROP and 97.2% for SQuADv2).
Overlaps never occurred in both the question and answer for DROP, and only occurred for both the question and answer for SQuADv2 in 5 of the 11,153 evaluation examples. Hence, for these two datasets, the model gains only background information and cannot memorize the answer to any specific questions (aside from the five examples in SQuADv2).

![](images/f57d574e222f76f75b1e84d3d07c583fd1d8da837c0e7c32a6a7c1a48c4566e1.jpg)
Figure 12: Like GPT-3, we also measured performance on cleaned versions of our datasets, which we have high confidence were unseen in the pretraining data of FLAN. We do not find that FLAN performed better on evaluation sets whose examples occurred more often in the pretraining data. When the percent of clean data is very small, there are fewer examples for computing the clean performance, which leads to high variance.

ANLI R1 and R2 (Nie et al., 2020) also had almost complete data contamination, to a much higher degree than GPT-3. Upon further inspection, we see that most overlaps occur in example contexts and not hypotheses (97.3% for ANLI R1 and 98.2% for ANLI R2). As ANLI R1 and R2 are based entirely on Wikipedia examples (R3 is not), we posit that this higher degree of contamination in our pretraining dataset compared with GPT-3's is potentially due to using a more-recent version of Wikipedia that includes the contexts used in ANLI R1 and R2 (which were collected in 2019). Because seeing a particular context in pretraining does not help with the NLI task given a new, unseen sentence, we think it is unlikely that these overlaps affected performance on the two datasets.

Of the remaining datasets, only ReCoRD and PIQA had a clean subset performance that was lower than the overall evaluation set performance by more than $1\%$.
These two datasets are language modeling (i.e., "what's the best continuation of this sentence?"), and so it is more likely compared with previous tasks that seeing a complete sentence in the pretraining data could help the model predict the right answer in downstream evaluations. For PIQA, both the goal and solution had overlaps in 93 of the 1,838 evaluation examples, and for ReCoRD, the query had overlaps in 2,320 of the 10,000 training examples. We hence mark these results with an asterisk * in Table 2. Brown et al. (2020) also reported substantial contamination rates for these two datasets (61% dirty for ReCoRD and 29% for PIQA), and also mark PIQA with an asterisk. + +As this overlap analysis follows that performed in Brown et al. (2020), we reiterate the same caveats: the conservative nature of our $n$ -gram matching procedure likely introduces additional false positives; there are no guarantees that the clean subset is drawn from the same distribution as the overall subset; and, accurately detecting test contamination is a relatively new research area without established best practices. Moreover, as our pretraining corpus is almost five times larger than that used for GPT-3 (which was 500B tokens), it is possible that there are more false positives in detecting dirty data. + +
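As a concrete illustration, the overlap rule described above (flag an evaluation example as "dirty" if any of its $n$-grams, split on spaces, appears in the pretraining corpus) can be sketched as follows. This is a toy sketch under our own naming (`split_clean_dirty`, `pct_diff` are illustrative, not code from the paper), and a real implementation would need a far more scalable index than an in-memory Python set:

```python
def ngrams(text, n=13):
    # Split on spaces, as in the procedure above, and collect all n-grams.
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def split_clean_dirty(pretrain_docs, eval_examples, n=13):
    # An example is "dirty" if any of its n-grams occurs in the pretraining corpus.
    seen = set()
    for doc in pretrain_docs:
        seen |= ngrams(doc, n)
    clean = [ex for ex in eval_examples if not (ngrams(ex, n) & seen)]
    dirty = [ex for ex in eval_examples if ngrams(ex, n) & seen]
    return clean, dirty

def pct_diff(clean_score, overall_score):
    # The "% Diff (clean - overall)" column: percent change of the clean-subset
    # score relative to the overall score, e.g. DROP: (33.0 - 22.4) / 22.4 = +47.3%.
    return 100.0 * (clean_score - overall_score) / overall_score
```

A large positive `pct_diff` on a dataset with very few clean examples (as for DROP) reflects the high-variance regime noted in the Figure 12 caption rather than evidence about contamination.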
| Dataset | Metric | Total count | Total acc/F1/BLEU | Clean count | Clean acc/F1/BLEU | % clean | % Diff (clean − overall) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| DROP | F1 | 9,536 | 22.4 | 61 | 33.0 | 0.6 | 47.4 |
| SQuADv2 | F1 | 11,873 | 41.3 | 106 | 38.7 | 0.9 | -6.2 |
| ANLI R1 | acc | 1,000 | 48.1 | 14 | 57.1 | 1.4 | 18.8 |
| ANLI R2 | acc | 1,000 | 42.9 | 21 | 38.1 | 2.1 | -11.2 |
| ReCoRD | acc | 10,000 | 4.6 | 3,203 | 4.5 | 32.0 | -2.7 |
| MultiRC | acc | 4,848 | 75.4 | 1,972 | 75.7 | 40.7 | 0.5 |
| PIQA | acc | 1,838 | 23.7 | 896 | 23.3 | 48.7 | -1.7 |
| ANLI R3 | acc | 1,200 | 44.2 | 718 | 45.3 | 59.8 | 2.5 |
| HellaSwag | acc | 10,042 | 28.5 | 6,578 | 28.7 | 65.5 | 0.7 |
| RTE | acc | 277 | 84.1 | 183 | 84.2 | 66.1 | 0.0 |
| WMT'14 En→Fr | BLEU | 3,003 | 31.3 | 2,243 | 31.5 | 74.7 | 0.9 |
| WMT'14 Fr→En | BLEU | 3,003 | 34.0 | 2,243 | 34.1 | 74.7 | 0.2 |
| BoolQ | acc | 3,270 | 76.5 | 2,515 | 76.3 | 76.9 | -0.4 |
| TQA (tfds-dev) | F1 | 11,313 | 62.2 | 8,731 | 62.0 | 77.2 | -0.2 |
| ARC Easy | acc | 2,365 | 79.5 | 1,888 | 79.0 | 79.8 | -0.6 |
| ARC Challenge | acc | 1,165 | 63.1 | 983 | 64.2 | 84.4 | 1.7 |
| OpenbookQA | acc | 500 | 74.6 | 425 | 74.8 | 85.0 | 0.3 |
| WMT'16 En→De | BLEU | 2,999 | 22.7 | 2,569 | 23.0 | 85.7 | 1.4 |
| WMT'16 De→En | BLEU | 2,999 | 38.6 | 2,569 | 38.7 | 85.7 | 0.2 |
| WMT'16 En→Ro | BLEU | 1,999 | 15.5 | 1,752 | 15.4 | 87.6 | -0.7 |
| WMT'16 Ro→En | BLEU | 1,999 | 36.7 | 1,752 | 36.8 | 87.6 | 0.1 |
| COPA | acc | 100 | 88.0 | 91 | 87.9 | 91.0 | -0.1 |
| CB | acc | 56 | 41.1 | 53 | 41.5 | 94.6 | 1.1 |
| NQ | F1 | 3,610 | 24.5 | 3,495 | 24.3 | 96.8 | -0.5 |
| StoryCloze | acc | 1,871 | 92.1 | 1,864 | 92.1 | 99.6 | 0.0 |
| Winogrande | acc | 1,267 | 39.4 | 1,265 | 39.4 | 99.8 | 0.2 |
+ +Table 5: Overlap statistics for the subset of datasets that are also used in GPT-3, sorted from dirtiest to cleanest. An evaluation example was dirty if it had any $n$ -gram collision with the pretraining corpus. We computed these results for FLAN's performance using only a single template for each dataset, so they differ slightly compared with the results for average performance over all templates. + +# D EXTENDED RELATED WORK + +# D.1 LANGUAGE MODELS AND MULTI-TASK LEARNING + +Our work is broadly inspired by a long line of prior work on language models for NLP applications (Dai & Le, 2015; Peters et al., 2018; Howard & Ruder, 2018; Radford et al., 2018; 2019, inter alia). Instruction tuning can be seen as a formulation of multitask learning (MTL), which is an established area within deep learning (Collobert et al., 2011; Luong et al., 2016; Ruder, 2017; Velay & Daniel, 2018; Clark et al., 2019b; Liu et al., 2019b, inter alia)—see Worsham & Kalita (2020) for a recent survey on MTL for NLP. Differing from prior MTL work which focuses on performance improvements across training tasks (Raffel et al., 2020; Aghajanyan et al., 2021) or to new domains (Axelrod et al., 2011), our work is motivated by improving zero-shot generalization to tasks that were not seen in training. + +# D.2 ZERO-SHOT LEARNING AND META-LEARNING + +Our work also falls in the well-established category of zero-shot learning, which has historically been used to refer to classifying instances among a set of unseen categories (Lampert et al., 2009; Romera-Paredes & Torr, 2015; Srivastava et al., 2018; Yin et al., 2019, inter alia). In NLP, zero-shot learning work also includes translating between unseen language pairs (Johnson et al., 2017; Pham et al., 2019), language modeling on unseen languages (Lauscher et al., 2020), as well as various NLP applications (Liu et al., 2019a; Corazza et al., 2020; Wang et al., 2021). 
Most recently, the emergent ability of language models (Brown et al., 2020) has led to increased interest in how models generalize to unseen tasks, the definition of zero-shot learning used in our paper. In addition, meta-learning (Finn et al., 2017; Vanschoren, 2018, inter alia) also broadly tries to train models that adapt quickly to unseen tasks, typically based on a few examples. + +# D.3 PROMPTING + +Instruction tuning leverages the intuition that language models at scale contain substantial world knowledge and can perform a range of NLP tasks (Brown et al., 2020, see also Bommasani et al. (2021)). Another line of work that shares this goal prompts models with continuous inputs optimized via backpropagation to substantially improve performance (Li & Liang, 2021; Lester et al., 2021; Qin & Eisner, 2021), as well as work that prompts models to produce specialized outputs (Wei et al., 2022). Although the success of these approaches depends heavily on model scale (Lester et al., 2021), for which large models can be costly to serve, the ability of a single large model to perform many tasks slightly eases this burden. As shown by our experiments in §4.5, prompt tuning is an orthogonal method for which instruction tuning can additionally improve performance. Reif et al. (2021) is similar to our work in that they also use related tasks to improve zero-shot learning, though they differ by only using related tasks in the context (and not finetuning), and focus on the application of text style transfer. + +Our work shares similar motivations with prompting in that we use inference-time text interactions to prompt a single model, without creating separate checkpoints for each task. 
Whereas prompting work such as GPT-3 uses prompt engineering to write prompts that intentionally mimic text that is likely to be seen during pretraining (e.g., for MultiRC GPT-3 tries a prompt that mimics a test with an answer key), we hope that finetuning models to respond to natural language instructions instead of completing a sentence will make such large models more accessible to non-technical users.

# D.4 FINETUNING LARGE LANGUAGE MODELS

Finetuning pretrained language models is a well-established method in NLP, with much of the work so far occurring on models in the range of 100M to 10B parameters (Dai & Le, 2015; Devlin et al., 2019; Raffel et al., 2020; Lewis et al., 2020, inter alia). For models of O(100B) parameters, recent work has finetuned task-specific models for program synthesis (Austin et al., 2021; Chen et al., 2021), summarization (Wu et al., 2021), as well as improved bias and fairness behavior (Solaiman & Dennison, 2021). In addition to the traditional "dense" models, sparse mixture of experts (MoE) models of up to more than 1T parameters have been trained and finetuned (Lepikhin et al., 2020; Fedus et al., 2021). Compared with this prior work that finetunes and evaluates on the same downstream task, our setup studies the effect of instruction tuning on the ability to perform unseen tasks.

# D.5 MULTI-TASK QUESTION ANSWERING

The instructions we use for instruction tuning are similar to QA-based task formulation research, which aims to unify NLP tasks by casting them as question-answering over a context. For instance, McCann et al. (2018) cast ten NLP tasks as QA and train a model on a collection of tasks formulated with natural language prompts; they report transfer learning gains on finetuning tasks as well as zero-shot domain adaptation results on SNLI (Bowman et al., 2015) and Amazon/Yelp Reviews (Kotzias et al., 2015). While McCann et al.
(2018) does not leverage unsupervised pre-training and only reports zero-shot transfer to unseen domains, our work uses a pretrained LM and focuses on zero-shot performance on unseen task clusters. UnifiedQA (Khashabi et al., 2020) shows similar transfer learning gains as McCann et al. (2018) across 20 datasets and reports good generalization to unseen tasks across four types of QA. Focusing on binary text classification, Zhong et al. (2021) finetune T5-770M on 43 tasks phrased as yes/no questions and study the zero-shot performance on unseen tasks. In comparison, our paper is much larger in scope, empirically demonstrating the idea on a wide range of tasks with a much larger model. Other work has used QA-based task formulation for more-targeted applications including semantic role labeling (He et al., 2015), relation extraction (Levy et al., 2017), coreference resolution (Wu et al., 2020) and named entity recognition (Li et al., 2020) as question answering. + +# D.6 INSTRUCTIONS-BASED NLP + +Recent improvements in the capabilities of language models have led to increased interest in a nascent area of instructions-based NLP (Goldwasser & Roth, 2014, and see McCarthy (1960)). Schick & Schütze (2021) (also see Gao et al., 2021; Tam et al., 2021) use task descriptions in cloze-style phrases to help language models assign soft labels for few-shot and semi-supervised learning, though this line of work finetunes new checkpoints for each downstream task. Efrat & Levy (2020) evaluated GPT-2 (Radford et al., 2019) on simple tasks ranging from retrieving the $n$ th word of a sentence to generating examples for SQuAD, concluding that GPT-2 performs poorly across all tasks. + +In terms of the setup of finetuning on a large number of tasks and evaluating on unseen tasks, two recent papers are similar to ours. Mishra et al. 
(2021) finetune BART (Lewis et al., 2020) using instructions and few-shot examples for tasks such as question answering, text classification, and text modification, and find that this few-shot finetuning with instructions improves performance on unseen tasks. Ye et al. (2021) introduce a setup for cross-task few-shot learning, finding that multi-task meta-learning using MAML (Finn et al., 2017) improves the few-shot capabilities of BART on unseen downstream tasks. Our work differs from these two papers in that we focus on zero-shot learning, for which we observe the crucial importance of model scale (FLAN is roughly 1,000× larger than BART-base).

Perhaps the papers most related to ours are the recent Sanh et al. (2021) and Min et al. (2021), which were released after our initial preprint. Min et al. (2021) finetune GPT-2 Large (770M parameters) to be a few-shot learner, which is the same approach as our experiment in Section 4.3. Similar to our conclusions, they also observe that including few-shot exemplars and instruction tuning are complementary ways to improve performance. Sanh et al. (2021) propose to finetune T5-11B to respond to prompts, and they also report performance improvements on zero-shot learning. These two papers and our work all study finetuning with instructions, but, as noted by Min et al. (2021), it is hard to directly compare results, due to differing model sizes, model types (decoder-only vs. encoder-decoder), pretraining data, task mixtures, and type of instructions (Sanh et al. (2021) say that their instructions are more diverse).

Finally, OpenAI has a model called InstructGPT (Ouyang et al., 2022). InstructGPT uses human annotations to guide desired model behavior, both via finetuning and reinforcement learning, finding that InstructGPT is preferred by human raters compared with unmodified GPT-3.

# E FREQUENTLY ASKED QUESTIONS

# How do the FLAN instructions differ from GPT-3 or T5 prompts?
GPT-3 prompting is done in a way such that the prompt looks like data that the model has been pretrained on, and the model finishes the continuation. T5 prompts are mostly just a tag for the dataset, which would not work in the zero-shot setting. In contrast, the prompts that we use for FLAN are similar to what would be used to ask a human to perform the task.

For instance, given an input for an NLI task, these would be the prompts.

# T5 prompt:

cb hypothesis: At my age you will probably have learnt one lesson.
premise: It's not certain how many lessons you'll learn by your thirties.

# GPT-3 prompt:

At my age you will probably have learnt one lesson.
question: It's not certain how many lessons you'll learn by your thirties. true, false, or neither? answer:

# FLAN prompt:

Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?

Because FLAN prompts are formulated as responses to an instruction, they do not work well for pretrained language models without finetuning; performance was near zero for most generation tasks. For instance, given the input "The dog runs. Translate this sentence to French.", LaMDA-PT continues with "The dog runs after the cat" instead of actually translating the sentence. Hence, we used the established GPT-3 prompts for our LaMDA-PT baselines.

# What are some limitations/failure cases of FLAN?

While we qualitatively find that FLAN responds well to most tasks, it does fail on some simple tasks. For instance, as shown in Figure 22, FLAN fails at the very simple task of returning the second word in a sentence, and also incorrectly translates a question to Danish when asked to answer the question in Danish. Additional limitations include a context length of only 1024 tokens (which is not enough for most summarization tasks), and that the model was mostly trained on English data.
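The contrast between the three prompt styles shown in the previous answer can be captured with simple template functions. This is an illustrative sketch only; the helper names (`t5_prompt`, `gpt3_prompt`, `flan_prompt`) are ours and not from the paper's codebase:

```python
def t5_prompt(premise, hypothesis):
    # Dataset-tag style ("cb" is the CB dataset tag): carries little
    # natural-language signal, so it does not transfer zero-shot.
    return f"cb hypothesis: {hypothesis} premise: {premise}"

def gpt3_prompt(premise, hypothesis):
    # Continuation style: mimics text the model may have seen in pretraining.
    return (f"{premise}\nquestion: {hypothesis} "
            "true, false, or neither? answer:")

def flan_prompt(premise, hypothesis):
    # Instruction style: phrased the way one would ask a human.
    return (f"Premise: {premise}\nHypothesis: {hypothesis}\n"
            "Does the premise entail the hypothesis?")
```

Only the instruction-style template asks the model a question in natural language, which is why it benefits from instruction tuning but scores near zero with an untuned language model.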
+ +# Can FLAN be used when large amounts of training data are available? + +In this work, we focus on cross-task generalization to zero-shot tasks, but we also believe that instruction tuning could result in positive task transfer among seen tasks, depending on the mixture of tasks (though we leave this for future work). In §4.5, where we apply prompt tuning to the FLAN checkpoint, we see promising results that indicate positive task transfer in a supervised setting. + +# Are the ten unique templates per dataset or per task cluster? + +The ten unique templates are for each dataset and not for a task cluster. This is because datasets in the same task cluster often differed slightly (e.g., "is this movie review positive" vs "is this yelp review positive"). + +# In Figure 7A, why does the untuned LaMDA-PT model see worse performance with more parameters for reading comprehension and sentiment analysis? + +For context, Figure 7A is a check of correctness for Figure 7B. Figure 7A confirms that scale improves performance for tasks that were seen during instruction tuning, as expected. The untuned LaMDA-PT model performance in Figure 7A is shown just for completeness. + +Nonetheless, the fact that scale does not always improve zero-shot performance of untuned LaMDA-PT is an interesting artifact. Initially, we were surprised, because Brown et al. (2020) shows that scale improves performance across a large number of tasks in aggregate. + +It turns out that scale does not improve performance for certain tasks. This is especially true for zero-shot learning, and we think that this happens to be the case for the reading comprehension and sentiment analysis tasks we evaluate. The GPT-3 paper itself similarly reports that zero-shot performance on BoolQ and DROP decreases from 13B to 175B parameters. The GPT-3 paper does not show results on sentiment analysis, but Holtzman et al. (2021) find that zero-shot performance on SST-2 also gets worse from 13B to 175B parameters. 
Hence, this artifact is consistent across both GPT-3 and the models we use. + +This artifact is certainly worth further study, but is outside the scope of instruction tuning. Ideally, we would have performed the Figure 7 ablation with cross-validation instead of a single split, which likely would have smoothed out that artifact. + +# F QUALITATIVE EXAMPLES + +This section shows qualitative examples of FLAN responding to various prompts. + +![](images/a19f3cea9ea4f331516223d944d91776255e6b91142fa788a2ddac63bf38aeae.jpg) +Figure 13: For sentiment analysis, FLAN changes the answer appropriately when the question is flipped. + +![](images/195ff6ab80ecd33c7d614e6f2c859df003cb0fa9d308bf5a293be10bcd37cd33.jpg) + +![](images/d0bf2b1eee689535ef4b9c301257abd6aa3b673a62ac23923160e2cc0c04d1a3.jpg) +Figure 14: For question answering, FLAN can answer a question in another language when instructed to do so. Multiple FLAN outputs are generated via random sampling with a temperature of 0.9 and top $k$ of 40. + +![](images/6c3b2fb869387e39fad79ae3922590779e3d2c0c4583b424201c9d9c38cbe7a9.jpg) + +![](images/247e3cf4445a89e7148d2dda2e8c0e05a3218d71b44820b66587eab25f579f0f.jpg) +Figure 15: FLAN can perform zero-shot rule-based manipulations. +Figure 16: FLAN can make zero-shot recommendations. Multiple FLAN outputs are generated via random sampling with a temperature of 0.9 and top $k$ of 40. + +# FLAN output: + +John was reading the paper "Finetuned Language Models are Zero-Shot Learners." + +# Input: + +John is reading the paper "Finetuned Language Models are Zero-Shot Learners." Change to passive voice. + +# FLAN output: + +The paper "Finetuned Language Models are Zero-Shot Learners" is being read by John. + +# Input: + +John is reading the paper "Finetuned Language Models are Zero-Shot Learners." Change to the verb to eat. + +# FLAN output: + +John is eating the paper "Finetuned Language Models are Zero-Shot Learners." 
# Input:

John is reading the paper "Finetuned Language Models are Zero-Shot Learners." Change the verb to eat and the subject to Noam Chomsky.

# FLAN output:

Noam Chomsky is eating the paper "Finetuned Language Models are Zero-Shot Learners."

# Input:

Recommend fun activities to do on a rainy night in Seattle.

# FLAN output:

- Go to the pacific science center
- Go out to eat
- Go to a comedy club
- Build a blanket fort
- Go to the Space Needle
- Go to underground tours
- Find a karaoke bar

# Input:

Recommend activities to do on a sunny weekend in Copenhagen.

# FLAN output:

- Visit Christiania
- Copenhagen Zoo
- Visit the Tivoli Gardens
- Take a canal tour
- Museums
- Visit The Little Mermaid Statue
- Visit Copenhagen Zoo
- Visit the Danish Maritime Museum

# Input:

Generate utterances with the intent "get COVID vaccine"

# FLAN output:

- I would like to get the covid vaccine.
- I would like a COVID vaccine as soon as possible
- Can I get a COVID vaccine as a preventative measure?
- Get COVID vaccine for a five year old
- Where is a center for COVID vaccine testing?

# Input:

Generate utterances with the intent "where to vote in the state Senate election"

# FLAN output:

- Tell me where to vote in the state senate election
- Find out where to vote in the state senate election
- Where am i supposed to vote in the state senate election
- Where can i get information on where to vote in the state Senate election?

Figure 17: FLAN can be used in a zero-shot fashion to generate data, such as utterances that are consistent with a given intent. Multiple FLAN outputs are generated via random sampling with a temperature of 0.9 and top $k$ of 40.
# Input:

Suggest a related search term to "watermelon"

# FLAN output:

- Cantaloupe
- Cucumber
- Melon
- Fruit
- Watermelon juice
- Watermelons
- Lemon
- Apple

# Input:

Suggest a related search term to "Novak Djokovic"

# FLAN output:

- novak djokovic girlfriend
- serena williams
- rafael nadal
- novak djokovic wife
- roger federer
- andy murray
- novak djokovic news
- novak djokovic serbia
- novak djokovic wimbledon

Figure 18: FLAN can be used for zero-shot query expansion. Multiple FLAN outputs are generated via random sampling with a temperature of 0.9 and top $k$ of 40.

# Input:

On the way to Yosemite, our front tire blew.

Finish writing the story, and include a happy ending.

# FLAN output (cherrypicked):

- We got our tire replaced. The mechanic was honest, kind and helpful. Then we had a smooth ride for the rest of the trip. Our trip was filled with fun and excitement.
- We waited three hours for the tow truck, but had a great picnic. The tow truck driver helped us put the spare on, and we arrived in Yosemite for our reservations dinner, with time to spare.

# Input:

"I guess I need to do laundry."

Rewrite the above sentence in a Shakespearean style.

# FLAN output:

- To my mind, verily, I suppose 'tis time, I wash my clothes.
- Mine eyes and ears and tongue declare me idle; and yet it is true I guess, By my garments, that I need to do laundry
- Lo my duty now calls To thee, to wash this body's clothes.

Figure 19: FLAN can perform zero-shot tasks relevant to assisted-writing applications. Multiple FLAN outputs are generated via random sampling with a temperature of 0.9 and top $k$ of 40.
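The figure captions above repeatedly mention decoding via random sampling with a temperature of 0.9 and top-$k$ of 40. A minimal sketch of that decoding step (the function and variable names are ours, not the paper's) is:

```python
import math
import random

def sample_top_k(logits, temperature=0.9, k=40, rng=random):
    # Keep only the k highest-scoring token ids.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Temperature-scale the kept logits and form unnormalized softmax weights.
    weights = [math.exp(logits[i] / temperature) for i in top]
    total = sum(weights)
    # Draw one token id proportionally to the renormalized weights.
    r, acc = rng.random() * total, 0.0
    for i, w in zip(top, weights):
        acc += w
        if acc >= r:
            return i
    return top[-1]
```

With `k=1` this reduces to greedy decoding; calling it repeatedly at each generation step, with different random draws, yields the multiple distinct outputs shown in the figures.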
+ +# Input: + +Make up a new word that means "when bananas shower from the sky" + +# FLAN output: + +- Bananation +- Banana shower +- Banania +- Bananafall +- Bananada + +# Input: + +Make up a word that means "like a pipedream" + +# FLAN output (cherrypicked): + +- Pipedreamic +- Pipedreamish +- Piperdreamlike +- Phantasmagoric + +Figure 20: FLAN can be used for zero-shot word formation. Multiple FLAN outputs are generated via random sampling with a temperature of 0.9 and top $k$ of 40. + +# Input: + +John moved to mountain view. He spent friday night in San Francisco. He got a COVID vaccine. He voted in the state senate election. + +Write a story that follows the above plot. + +# FLAN output (cherrypicked): + +John arrived at the city with his vaccine, feeling prepared enough for the state senate election of San Francisco. He had voted for the first time last night and was feeling good about it. He felt more prepared than ever to win, and win big, because he was getting a COVID vaccine. He knew that his opponent had not gotten a COVID vaccine, and he knew he had already defeated him before the election began. + +# Input: + +Write a sad story about carrots. + +# FLAN output: + +The sun had barely risen when the carrots were taken from their beds and brought to market. They'd had no time to gather their strength. They smelled of earth, their greens still wet from the dew. At first no one was interested in buying them. But one man finally reached out and picked up one of the carrots. "I'll take this one!" he cried out. "It's for my mother! I'll pay you well for it." The man took the carrot to the cashier, eager to get it home to his mother. He was careful to wrap it carefully so that it wouldn't get bruised or dirty. He hurried home, and when he saw his mother, he gave her the carrot. "Is this one of the ones I grew?" she asked, her voice wavering with emotion. "Yes, mother," he replied. "It is, dear." 
The man left the carrot with his mother and hurried back to market. His mother was very sick, and he was afraid she wouldn't last much longer. He needed to buy all the carrots in market so that none would be left when she was gone.

Figure 21: Open-ended generation tasks by FLAN. The carrot story was from sampling sixteen outputs with a minimum length of 150 and choosing the highest probability output.

# Input:

Conan went to the store to buy some pencils.

What is the second word in this sentence?

# FLAN output:

- pencils
- Conan
- store
- buy

# Input:

What is Barack Obama's profession?

Answer in Danish

# FLAN output:

- Hvad er Barack Obamas beskæftigelse?
- Hvad er Barack Obamas erherv?

Figure 22: Example failure cases for FLAN. Left: FLAN fails to perform a simple task of returning the $n$th word. Right: FLAN translates a question instead of answering it. Multiple FLAN outputs are generated via random sampling with a temperature of 0.9 and top $k$ of 40.

# CHANGES FROM V4 TO V5

- Replaced the tables in the main paper with a figure, which takes up less space and focuses on zero-shot performance.
- Added GLaM 64B/64E as a baseline.
- Moved the ablation about the role of instructions, as well as prompt tuning, into the main paper (and condensed the figures).

# CHANGES TO V4 FROM V3

- We added a Frequently Asked Questions section (Appendix E).
- We added a section with qualitative examples (Appendix F).
- We added an additional ablation study on the role of instructions during finetuning (Appendix B.2).
- We updated the related work (Appendix D) with manuscripts posted on arxiv since our initial preprint.

# CHANGES TO V3 FROM V2

- The number of tokens used in pretraining was corrected from 2.81T to 2.49T tokens.

# CHANGES TO V2 FROM V1

- We updated the terminology to "datasets" and "task clusters."
- We renamed the previous "open-domain QA" task cluster to "closed-book QA."
- We extended the related work section and moved it to Appendix D, using a shorter version in the main body.
- We added FLAN and LaMDA-PT results for additional datasets for which GPT-3 results were not reported.
- For TriviaQA, v1 reported results on the tfds dev set of 11,313 examples. GPT-3 actually evaluates on the wikipedia dev set of 7,993 examples, so we ran an additional evaluation on that dev set in order to compare with GPT-3's performance. Zero-shot FLAN now beats zero-shot GPT-3 on that task (and therefore on 20 of 25 tasks). We still show the original result in Table 2, though there is no GPT-3 result to compare with.
- We moved commonsense reasoning and coreference resolution from the main body to the Appendix.
- We moved prompt tuning from the main body to §4.5.
- We added data contamination analysis (Appendix C).
- We added few-shot instruction tuning (§4.4).
- We cited additional datasets in Appendix G.
- The number of tokens used in pretraining was corrected from 2.81T to 2.49T tokens.

# G TASKS AND DATASETS

This appendix further details the datasets that we use in this paper. We group datasets into one of the following task clusters:

- Natural language inference concerns how two sentences relate, typically asking, given a first sentence, whether a second sentence is true, false, or possibly true. We use the following datasets:

1. ANLI (Nie et al., 2020)
2. CB (De Marneffe et al., 2019)
3. MNLI (Williams et al., 2018)
4. QNLI (Rajpurkar et al., 2018)
5. SNLI (Bowman et al., 2015)
6. WNLI (Levesque et al., 2012)
7. RTE (Dagan et al., 2005; Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009)

- Reading comprehension tests the ability to answer a question when given a passage that contains the answer. We use the following datasets:

1. BoolQ (Clark et al., 2019a)
2. DROP (Dua et al., 2019)
3. MultiRC (Khashabi et al., 2018)
4. OBQA (Mihaylov et al., 2018)
5. SQuADv1 (Rajpurkar et al., 2016)
6.
SQuADv2 (Rajpurkar et al., 2018)

- Commonsense reasoning evaluates the ability to perform physical or scientific reasoning with an element of common sense. We use the following datasets:

1. COPA (Roemmele et al., 2011)
2. HellaSwag (Zellers et al., 2019)
3. PiQA (Bisk et al., 2020)
4. StoryCloze (Mostafazadeh et al., 2016)

- Sentiment analysis is a classic NLP task that aims to understand whether a piece of text is positive or negative. We use the following datasets:

1. IMDB (Maas et al., 2011)
2. Sentiment140 (Go et al., 2009)
3. SST-2 (Socher et al., 2013)
4. Yelp (Fast.AI)

- Closed-book QA asks models to answer questions about the world without specific access to information that contains the answer. We use the following datasets:

1. ARC (Clark et al., 2018)
2. NQ (Lee et al., 2019; Kwiatkowski et al., 2019)
3. TriviaQA (Joshi et al., 2017)

- Paraphrase detection asks a model to determine whether two sentences are semantically equivalent. We use the following datasets:

1. MRPC (Dolan & Brockett, 2005)
2. QQP (see Wang et al., 2018)
3. Paws Wiki (Zhang et al., 2019)

- Coreference resolution tests the ability to identify expressions of the same entity in some given text. We use the following datasets:

1. DPR (Rahman & Ng, 2012)
2. Winogrande (Sakaguchi et al., 2020)
3. WSC273 (Levesque et al., 2012)

- Reading comprehension with commonsense combines elements of both reading comprehension and commonsense reasoning. We use the following datasets:

1. CosmosQA (Huang et al., 2019)
2. ReCoRD (Zhang et al., 2018)

- Struct to text tests the ability to describe some structured data using natural language. We use the following datasets:

1. CommonGen (Lin et al., 2020)
2. DART (Nan et al., 2021)
3. E2ENLG (Dušek et al., 2019)
4. WebNLG (Gardent et al., 2017)

- Translation is the task of translating text from one language into a different language. We use the following datasets:

1. En-Fr from WMT'14 (Bojar et al., 2014)
2.
En-De, En-Tr, En-Cs, En-Fi, En-Ro, and En-Ru from WMT'16 (Bojar et al., 2016) +3. En-Es from Paracrawl (Bañón et al., 2020) + +- Summarization asks models to read a piece of text and generate an abbreviated summary of it. We use the following datasets: + +1. AESLC (Zhang & Tetreault, 2019) +2. CNN-DM (See et al., 2017) +3. Gigaword (Napoles et al., 2012) +4. MultiNews (Fabbri et al., 2019) +5. Newsroom (Grusky et al., 2018) +6. Samsum (Gliwa et al., 2019) +7. XSum (Narayan et al., 2018) +8. AG News (Zhang et al., 2015) +9. Opinion Abstracts - Rotten Tomatoes (Wang & Ling, 2016) +10. Opinion Abstracts - iDebate (Wang & Ling, 2016) +11. Wiki Lingua English (Ladhak et al., 2020) + +Additional datasets that we assign to a miscellaneous task cluster include: + +1. Conversational question-answering: QuAC (Choi et al., 2018) and CoQA (Reddy et al., 2019) +2. Evaluating context-sentence word meanings: WiC (Pilehvar & Camacho-Collados, 2019) +3. Question classification: TREC (Li & Roth, 2002; Hovy et al., 2001) +4. Linguistic acceptability: CoLA (Warstadt et al., 2019) +5. Math questions (Saxton et al., 2019) + +For all tasks, our finetuning and evaluation code uses tensorflow datasets (TFDS) to load and process datasets. Regarding the number of training examples per dataset, we limited the training set size per dataset to 30,000 so that no dataset dominated the finetuning distribution. When a test set with labels was available in TFDS, we used it; otherwise, we used the TFDS validation set as our test set, splitting the training set into a train and dev set. + +On the following pages, we show inputs and outputs for evaluation tasks where we compared with GPT-3. See the attached supplementary material for the templates for all other datasets. + +# G.1 NATURAL LANGUAGE INFERENCE + +# INPUT + +Joey Heindle (born 14 May 1993 in Munich) is a German singer. He is best known for winning the seventh season of the game show Ich bin ein Star - Holt mich hier raus! 
and finishing in 5th place in season 9 of Deutschland sucht den Superstar, despite universally negative reviews from the jury each week. + +Based on the paragraph above can we conclude that "Joey Heindle was highly disliked by people on television."? + +# OPTIONS: + +- Yes +- It's impossible to say +- No + +# TARGET + +Yes + +Table 6: Example input and target for Adversarial NLI (ANLI). ANLI (Nie et al., 2020) is a large-scale NLI benchmark with adversarial examples collected iteratively with a human and model in the loop. The task is to determine whether a hypothesis is entailed by a premise (entailment, not entailment, or impossible to say). There are three rounds, R1-R3. Of the three training sets with 16,946, 45,460, and 100,459 examples, we use 16,946, 30,000, and 30,000 for train and 200 from each of the three TFDS validation sets for dev. We use the TFDS "test" sets of 1,000, 1,000, and 1,200 examples as our test set for reporting numbers. + +# INPUT + +A: so I watch the fish, you know. Whatever I can do to keep myself occupied. I like to have the TV on, because that usually keeps me, um, more occupied. It kind of takes the time away and I don't realize, that's really the only time I ever watch TV, is when I'm on the bike. and then usually after I'm done riding the bike, just to cool myself down, I usually take a walk, you know, and that just kind of uh, gets me, you know, to where I'm not quite as tired I guess. But it's definitely a task. B: You think so? A: I can't say that I really enjoy it. + +Based on the paragraph above can we conclude that "she really enjoys it"? + +# OPTIONS: + +- Yes +- No +- It's impossible to say + +# TARGET + +No + +Table 7: Example input and target for Commitment Bank (CB). CB (De Marneffe et al., 2019) is a corpus of texts in which a hypothesis is extracted from a premise, and the task is to determine whether the hypothesis is entailed by the premise (entailment, not entailment, or impossible to say).
Of the training set with 250 examples, we use 200 for train and 50 for dev. We use the TFDS validation set of 56 examples as our test set for reporting numbers. + +
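The table captions in this appendix all report train/dev splits produced by the same recipe described above: cap each dataset at 30,000 training examples and hold out a small dev set, usually 200 examples, from the training data. A minimal sketch of that bookkeeping (the helper name is ours, not the paper's code; for some datasets the dev set instead comes from the TFDS validation split, as each caption notes):

```python
TRAIN_CAP = 30_000  # per-dataset cap so no dataset dominates finetuning

def cap_and_split(examples, dev_size=200):
    """Cap a dataset's training examples and carve out a small dev set.

    Illustrative only: the actual dev source varies by dataset, as
    noted in each table caption.
    """
    capped = examples[:TRAIN_CAP]
    return capped[dev_size:], capped[:dev_size]  # (train, dev)
```

For RTE below, for instance, this recipe yields the reported 2,290 train / 200 dev split from its 2,490 training examples.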
# INPUT + +After years of study, the Vatican's doctrinal congregation has sent church leaders a confidential document concluding that "sex-change" procedures do not change a person's gender in the eyes of the church. + +Based on the paragraph above can we conclude that "Sex-change operations become more common."? + +# OPTIONS: + +- yes +- no + +# TARGET + +no
 + +Table 8: Example input and target for Recognizing Textual Entailment (RTE). RTE (Dagan et al., 2005; Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009) asks whether a second sentence is entailed by a first (binary, either entailed or not entailed). Of the training set with 2,490 examples, we use 2,290 for train and 200 for dev. We use the TFDS validation set of 277 examples as our test set for reporting numbers. + +# G.2 READING COMPREHENSION + +# INPUT + +There are four ways an individual can acquire Canadian citizenship: by birth on Canadian soil; by descent (being born to a Canadian parent); by grant (naturalization); and by adoption. Among them, only citizenship by birth is granted automatically with limited exceptions, while citizenship by descent or adoption is acquired automatically if the specified conditions have been met. Citizenship by grant, on the other hand, must be approved by the Minister of Immigration, Refugees and Citizenship. + +Can we conclude that can i get canadian citizenship if my grandfather was canadian? + +# OPTIONS: + +- no +- yes + +# TARGET + +no + +Table 9: Example input and target for Boolean Questions (BoolQ). BoolQ (Clark et al., 2019a) asks a yes/no question based on a passage and a question. Of the training set with 9,427 examples, we use 9,227 for train and 200 for dev. We use the TFDS validation set of 3,270 examples as our test set for reporting numbers. + +# INPUT + +Imagine you are standing in a farm field in central Illinois. The land is so flat you can see for miles and miles. On a clear day, you might see a grain silo 20 miles away. You might think to yourself, it sure is flat around here. If you drive one hundred miles to the south, the landscape changes. In southern Illinois, there are rolling hills. Why do you think this is? What could have caused these features? There are no big rivers that may have eroded and deposited this material.
The ground is capable of supporting grass and trees, so wind erosion would not explain it. To answer the question, you need to go back 12,000 years. Around 12,000 years ago, a giant ice sheet covered much of the Midwest United States. Springfield, Illinois, was covered by over a mile of ice. Its hard to imagine a mile thick sheet of ice. The massive ice sheet, called a glacier, caused the features on the land you see today. Where did glaciers go? Where can you see them today? Glaciers are masses of flowing ice. + +Question: "How big were the glaciers?" + +Response: "One mile" + +Does the response correctly answer the question? + +# OPTIONS: + +- no +- yes + +# TARGET + +yes + +Table 10: Example input and target for Multi-Sentence Reading Comprehension (MultiRC). MultiRC (Khashabi et al., 2018) asks an open-ended question given a paragraph that contains the answer. Of the training set with 27,243 examples, we use 27,043 for train and 200 for dev. We use the TFDS validation set of 4,848 examples as our test set for reporting numbers. + +
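Templates that end in an OPTIONS block, like BoolQ and MultiRC above, turn the task into classification: the model's answer is compared against the listed candidates. One common way to do this with a language model is rank classification, scoring each option as a continuation of the prompt and picking the highest. A toy sketch of that selection logic (the scoring function here is a stand-in, not a real language model):

```python
def rank_options(prompt, options, score_fn):
    """Return the option that `score_fn` rates highest as a continuation
    of `prompt`. `score_fn(prompt, option) -> float` stands in for a real
    language-model log-likelihood."""
    return max(options, key=lambda opt: score_fn(prompt, opt))

# Toy scorer for illustration only: count words shared by prompt and option.
def word_overlap(prompt, option):
    return len(set(prompt.lower().split()) & set(option.lower().split()))
```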
# INPUT + +soil is a renewable resource for growing plants + +A plant that needs to expand will be able to have an endless resource in + +# OPTIONS: + +- dirt +- pesticides +- pay +- beans + +# TARGET + +dirt + +Table 11: Example input and target for Openbook Question Answering (OBQA). OBQA (Mihaylov et al., 2018) asks 4-way multiple choice questions based on facts. Of the training set with 4,957 examples, we use all 4,957 for train and 200 examples from the TFDS validation set of 500 for dev. We use the TFDS test set of 500 examples as our test set for reporting numbers. + +# G.3 COMMONSENSE REASONING + +
# INPUT + +I packed up my belongings. What is the cause? + +# OPTIONS: + +- I was hunting for a new apartment. +- I was moving out of my apartment. + +# TARGET + +I was moving out of my apartment. + +Table 12: Example input and target for Choice of Plausible Alternatives (COPA). COPA (Roemmele et al., 2011) is a causal reasoning task that asks a model to infer either a cause or an effect of a premise from two choices. Of the training set with 400 examples, we use 350 for train and 50 for dev. We use the TFDS validation set of 100 examples as our test set for reporting numbers. + +
# INPUT + +What happens next in this paragraph? + +Once the rope is inside the hook, he begins moving up the wall but shortly after he stops and begins talking. The male then begins talking about the clip again and goes back up the wall. as he + +# OPTIONS: + +- progresses, there are hooks everywhere on the wall and when he gets near them, he puts his rope inside of it for support and safety. +- changes time, an instant replay of his initial move is shown a second time. +- continues to talk, another male speaks about the move and shows another closeup of the plex by the male. +- continues, other people start to arrive and begin to hang out with him as he makes a few parts of the rope. + +# TARGET + +progresses, there are hooks everywhere on the wall and when he gets near them, he puts his rope inside of it for support and safety.
+ +Table 13: Example input and target for Commonsense Sentence Completion (HellaSwag). HellaSwag (Zellers et al., 2019) tests for sentence completion that requires common sense, asking for the most probable ending given four contexts. Of the training set with 39,905 examples, we use 30,000 for train and 200 for dev. We use the TFDS validation set of 10,042 examples as our test set for reporting numbers. + +# INPUT + +Here is a goal: Remove smell from garbage disposal. + +How would you accomplish this goal? + +# OPTIONS: + +- Create soda ice cubes and grind through disposal. +- Create vinegar ice cubes and grind through disposal. + +# TARGET + +Create vinegar ice cubes and grind through disposal. + +Table 14: Example input and target for Physical Question Answering (PiQA). PiQA (Bisk et al., 2020) is a commonsense QA benchmark for naive physics reasoning, where a solution to a goal must be selected from two choices. Of the training set with 16,113 examples, we use 16,013 for train and 100 for dev. We use the TFDS validation set of 1,838 examples as our test set for reporting numbers. + +# INPUT + +Caroline never drinks carbonated beverages. Her friends pick on her because of it. One day they challenged her to drink a soda. Caroline wanted to win the challenge. + +# Predict the next sentence. + +# OPTIONS: + +- Caroline refused to open the soda. +- Caroline opened the soda and drank it all in one gulp! + +# TARGET + +Caroline opened the soda and drank it all in one gulp! + +Table 15: Example input and target for The Story Cloze Test (StoryCloze). StoryCloze (Mostafazadeh et al., 2016) is a commonsense reasoning framework for story generation, where a system chooses the correct ending to a four-sentence story. We use the 2016 version on TFDS. Of the validation set with 1,871 examples (no training set is available), we use 1,671 for train and 200 for dev. We use the TFDS test set of 1,871 examples as our test set for reporting numbers. 
 + +# G.4 CLOSED-BOOK QA + +# INPUT + +What season is the Northern Hemisphere experiencing when it is tilted directly toward the Sun? + +# OPTIONS: + +- fall +- winter +- spring +- summer + +# TARGET + +summer + +Table 16: Example input and target for The AI2 Reasoning Challenge (ARC). ARC (Clark et al., 2018) asks grade-school level 4-way multiple choice science questions. There is a challenge set and an easy set, where the challenge set questions were answered incorrectly by both a retrieval-based algorithm and a co-occurrence algorithm. Of the training sets with 1,119 examples (challenge) and 2,251 (easy), we use 919 and 2,051 respectively for train and 200 each for dev. We use the TFDS test sets of 1,172 and 2,376 examples respectively as our test set for reporting numbers. + +# INPUT + +Question: who is the girl in more than you know?? + +Answer: + +# TARGET + +Romi Van Renterghem. + +Table 17: Example input and target for Natural Questions (Open) (NQ). NQ (Lee et al., 2019; Kwiatkowski et al., 2019) asks for an open-ended answer given a question, where all questions can be answered using the contents of Wikipedia. Of the training set of 87,925 examples, we use 30,000 for train and 200 for dev. We use the TFDS validation set of 3,610 examples as our test set for reporting numbers. + +# INPUT + +Please answer this question: Henry Croft, an orphan street sweeper who collected money for charity, is associated with what organised charitable tradition of working class culture in London, England? + +# TARGET + +pearly kings and queens + +Table 18: Example input and target for Trivia Question Answering (TriviaQA). TriviaQA (Joshi et al., 2017) includes question-answer pairs authored by trivia enthusiasts. Of the training set of 87,622 examples, we use 30,000 for train and 200 for dev. We use 7,993 examples from Wikipedia of the 11,313 examples in the TFDS validation set, which is the same validation set used in Brown et al. (2020),
as our test set for reporting numbers. + +# G.5 COREFERENCE RESOLUTION + +
# INPUT + +How does the sentence end? + +Elena wanted to move out of her parents fast but Victoria wanted to stay for a while, + +# OPTIONS: + +- Elena went to school. +- Victoria went to school. + +# TARGET + +Victoria went to school. + +Table 19: Example input and target for Adversarial Winograd Schema Challenge (Winogrande). Winogrande (Sakaguchi et al., 2020) tests for coreference resolution by asking a model to fill in a masked token in a sentence by choosing an entity from two options. Of the 40.4k examples in the XL training set, we use 30,000 for train and 200 for dev. We use the TFDS validation set of 1,267 examples as our test set for reporting numbers. + +
# INPUT + +Jane knocked on Susan's door, but there was no answer. + +# OPTIONS: + +- Jane was out. +- Susan was out. + +# TARGET + +Susan was out.
 + +Table 20: Example input and target for Winograd Schema Challenge (WSC273). WSC273 (Levesque et al., 2012) tests for coreference resolution by asking a model to complete the sentence in a fashion that requires understanding the entities in the sentence. WSC273 is test-set only and provides no training examples, so we use none for train or dev. We use the TFDS test set as our test set for reporting numbers. + +# G.6 READING COMPREHENSION WITH COMMONSENSE + +# INPUT + +Complete the passage. + +(CNN) – At first glance, "The Flat" might seem like an episode of "Hoarders," Israeli-style. The documentary film opens after an elderly woman dies in Tel Aviv. Her grandchildren assemble to clean out her apartment, packed with dusty books, vintage clothing (dozens of pairs of fancy gloves, for instance), enough purses to stock a department store, jewelry, mementoes and closets full of knickknacks. But buried among the detritus they chance upon something remarkable – mysterious papers linking the grandparents to an important Nazi figure. How could such ardent Zionists, who left their native Germany in the early 1930s, have been involved with an SS official like Leopold von Mildenstein? + +What I found out was this journey, the Nazi ( + +# OPTIONS: + +- Arnon Goldfinger) and his wife were accompanied by my grandparents," Goldfinger told CNN. +- CNN) and his wife were accompanied by my grandparents," Goldfinger told CNN. +- Germany) and his wife were accompanied by my grandparents," Goldfinger told CNN. +- Israeli) and his wife were accompanied by my grandparents," Goldfinger told CNN. +- Leopold von Mildenstein) and his wife were accompanied by my grandparents," Goldfinger told CNN. +- Nazi) and his wife were accompanied by my grandparents," Goldfinger told CNN. +- SS) and his wife were accompanied by my grandparents," Goldfinger told CNN. +- Tel Aviv) and his wife were accompanied by my grandparents," Goldfinger told CNN.
+- The Flat) and his wife were accompanied by my grandparents," Goldfinger told CNN. +- Zionists) and his wife were accompanied by my grandparents," Goldfinger told CNN. + +# TARGET + +Leopold von Mildenstein) and his wife were accompanied by my grandparents," Goldfinger told CNN. + +Table 21: Example input and target for Reading Comprehension with Commonsense Reasoning (ReCoRD). ReCoRD (Zhang et al., 2018) asks for the answer to a cloze-style question where an entity is masked out. Of the training set of 100,730 examples, we use 30,000 for train and 200 for dev. We use the TFDS validation set of 10,000 examples as our test set for reporting numbers. + +# G.7 TRANSLATION (7 LANGUAGES) + +# INPUT + +Here the largest town of the district is located: Nordenham, lying opposite to Bremerhaven at the Weser mouth. + +Translate to German + +# TARGET + +An der B 211 befindet sich in Loyermoor der so genannte "Geest-Abbruch", der eine Höhendifferenz von gut 30 Meter überbrachte. + +Table 22: Example input and output for translation. This example is from WMT'16 English-German; all languages use the same translation templates. 
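The translation inputs above follow the same simple pattern for every language pair. A sketch of one such template in the style of Table 22 (the function name is ours, and the paper uses several paraphrased templates per task, so this shows only the general shape):

```python
def translation_prompt(source_sentence, target_language):
    """Build a translation input in the style of Table 22:
    source text, a blank line, then the instruction."""
    return f"{source_sentence}\n\nTranslate to {target_language}"
```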
# FINE-TUNING CAN DISTORT PRETRAINED FEATURES AND UNDERPERFORM OUT-OF-DISTRIBUTION + +Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, Percy Liang +Stanford University, Computer Science Department + +# ABSTRACT + +When transferring a pretrained model to a downstream task, two popular methods are full fine-tuning (updating all the model parameters) and linear probing (updating only the last linear layer—the "head"). It is well known that fine-tuning leads to better accuracy in-distribution (ID).
However, in this paper, we find that fine-tuning can achieve worse accuracy than linear probing out-of-distribution (OOD) when the pretrained features are good and the distribution shift is large. On 10 distribution shift datasets (BREEDS-Living17, BREEDS-Entity30, DomainNet, CIFAR $\rightarrow$ STL, CIFAR-10.1, FMoW, ImageNetV2, ImageNet-R, ImageNet-A, ImageNet-Sketch), fine-tuning obtains on average $2\%$ higher accuracy ID but $7\%$ lower accuracy OOD than linear probing. We show theoretically that this tradeoff between ID and OOD accuracy arises even in a simple setting: fine-tuning overparameterized two-layer linear networks. Our analysis suggests that the easy two-step strategy of linear probing then full fine-tuning (LP-FT), sometimes used as a fine-tuning heuristic, combines the benefits of both fine-tuning and linear probing. Empirically, LP-FT outperforms both fine-tuning and linear probing on the above datasets ( $1\%$ better ID, $10\%$ better OOD than full fine-tuning). + +# 1 INTRODUCTION + +Pretraining a model on a large dataset before transferring to a downstream task's training data substantially improves accuracy over training from scratch—for example, pretraining a ResNet-50 on unlabeled ImageNet boosts accuracy on CIFAR-10 from $94\%$ to $98\%$ (Chen et al., 2020a;b). High-stakes applications such as poverty mapping in under-resourced countries (Jean et al., 2016), self-driving cars (Yu et al., 2020), and medical diagnosis (AlBadawy et al., 2018), require models that also generalize to circumstances not seen in the training distribution. In addition to testing on data drawn from the downstream task's training distribution (in-distribution; ID), it is increasingly important to test on data distributions unseen during training (out-of-distribution; OOD). 
+ +After initializing with a pretrained model, two popular transfer methods are fine-tuning (running gradient descent on all the model parameters), and linear probing (tuning the head but freezing lower layers). In the ID setting it is well known that fine-tuning leads to better accuracy than linear probing (Kornblith et al., 2019; Zhai et al., 2020; He et al., 2020), and even when testing OOD, prior work usually fine-tunes all parameters of their model (Hendrycks et al., 2019a; Miller et al., 2021; Andreassen et al., 2021). Intuitively, fine-tuning all layers of a network can improve pretrained features by adapting them to the specific task, while linear probing freezes these features. + +In this work, we investigate the OOD accuracy of fine-tuning and linear probing and find that surprisingly, fine-tuning can do worse than linear probing in the presence of a large distribution shift. We experiment on ten distribution shift benchmarks (BREEDS Living17, BREEDS Entity30, DomainNet, CIFAR $\rightarrow$ STL, CIFAR10.1, FMoW Geo-shift, ImageNetV2, ImageNet-R, ImageNet-A, ImageNet-Sketch), initializing with good pretrained features from MoCo-v2 (Chen et al., 2020b) and CLIP (Radford et al., 2021). While both methods offer gains over training from scratch, fine-tuning improves the average ID accuracy relative to linear probing from $83\%$ to $85\%$ but brings down the OOD accuracy from $66\%$ to $59\%$ (Figure 1). + +When and why does fine-tuning underperform linear probing? We theoretically consider fine-tuning a two-layer linear network in an overparameterized regression setting where the feature extractor layer has been pretrained to map high-dimensional inputs to useful, lower-dimensional, features. We prove that fine-tuning is worse than linear probing on directions outside the span of the training data when using "good" pretrained features. 
Even with an infinitesimally small learning rate, fine-tuning distorts pretrained features—the features of ID training data are updated while those of OOD data change less. Since the head and feature extractor are simultaneously optimized during fine-tuning to a configuration that works well on ID training data, the head only accommodates the distorted features of ID points and performs poorly (relative to linear probing) on the less changed features of OOD points. Interestingly, we show that this feature distortion issue cannot be simply fixed by early stopping—throughout the entire process of fine-tuning, we never pass through parameters that do well OOD (relative to linear probing). On the other hand, given "good" features, linear probing extrapolates better OOD because it preserves pretrained features, but does worse than fine-tuning ID because linear probing cannot adapt the features to the downstream task. + +![](images/d218aeed8b185ea68a2f3fa87bb190164fb500e5f7140973e8a0da6828cb3e9a.jpg) +Figure 1: Given a good feature extractor (top-left), a randomly initialized head is added to map features to outputs and we can (a) fine-tune all the model parameters or (b) linear probe, which freezes the feature extractor and trains only the head. We run experiments on ten distribution shifts. Fine-tuning does well when the test example is sampled from the fine-tuning distribution (ID), but can underperform on test examples sampled from OOD distributions (when the distribution shift is large). (c) Our theory indicates that fine-tuning can distort the pretrained feature extractor and lead to poor OOD accuracy, but initializing with a linear probed head can fix this—empirically LP-FT gets better accuracies both ID and OOD. + +Technical challenges. Existing theoretical work on transfer learning focuses on linear probing (Wu et al., 2020; Tripuraneni et al., 2020; Du et al., 2020).
In contrast, analyses of fine-tuning are scarce and challenging because they require understanding the training dynamics, instead of only the loss function and its global minimizers. In fact, fine-tuning and training from scratch optimize the same training loss and only differ in their initializations (pretrained vs. random). A mathematical analysis that distinguishes them needs to capture properties of the different minima that these algorithms converge to, a phenomenon that is sometimes theoretically referred to as the implicit regularization effect of initialization (Neyshabur et al., 2014). Accordingly, our analysis reasons about the parameters that gradient methods pass through starting from the pretrained initialization, which is challenging because this is a non-convex optimization problem and there is no known closed form for this trajectory. Two-layer linear networks are widely studied in the literature on implicit regularization (Saxe et al., 2014; Gunasekar et al., 2017; Gidel et al., 2019; Arora et al., 2018). However, these works analyze random and often small initializations, which don't capture pretraining.
LP-FT and vanilla fine-tuning use similar amounts of compute because the first step of linear probing is relatively cheap. Prior work has used LP-FT (Levine et al., 2016; Kanavati & Tsuneki, 2021) (or variants such as layerwise fine-tuning (Howard & Ruder, 2018) or larger learning rates for the head layer (Prabhu et al., 2021))—however, it has not been used for robustness / OOD accuracy, and we show that it addresses the ID-OOD tradeoff theoretically and empirically. Note that LP-FT is not meant to be a SOTA method but rather a simple, principled way to get good ID and OOD accuracy—we hope our analysis inspires even better methods for robust fine-tuning. + +Empirical validation. Finally, we find that fine-tuning fails and LP-FT works, for the reasons predicted by our feature distortion theory: (1) fine-tuning changes the features for ID examples more than for OOD examples, leading to distortions; (2) LP-FT indeed changes both ID and OOD features $10 - 100 \times$ less than fine-tuning does; (3) LP-FT gets the best of both worlds, achieving better accuracies than fine-tuning and linear probing both ID and OOD (Figure 1). + +# 2 SETUP + +Task and evaluation. Given training examples sampled from some distribution $P_{\mathrm{id}}$, our goal is to learn a predictor $f: \mathbb{R}^d \to \mathcal{Y}$ to map inputs $x \in \mathbb{R}^d$ to outputs $y \in \mathcal{Y}$. We evaluate predictors on their standard "in-distribution" (ID) performance $L_{\mathrm{id}}$ on new test samples drawn from $P_{\mathrm{id}}$ that the training data is also sampled from. We also evaluate classifiers on their "out-of-distribution" (OOD) performance $L_{\mathrm{ood}}$ on test samples drawn from a new distribution $P_{\mathrm{ood}}$ that is different from $P_{\mathrm{id}}$.
Formally, for some loss function $\ell$, we evaluate classifiers on: + +$$ +L_{\mathrm{id}}(f) = \underset{(x, y) \sim P_{\mathrm{id}}}{\mathbb{E}}[\ell(f(x), y)] \quad \text{and} \quad L_{\mathrm{ood}}(f) = \underset{(x, y) \sim P_{\mathrm{ood}}}{\mathbb{E}}[\ell(f(x), y)]. \tag{2.1} +$$ + +Models. In this work, we focus on predictors that leverage pretrained representations. We parameterize the final predictor $f$ as follows: given features $g_{B}(x) \in \mathbb{R}^{k}$ for some feature extractor parameters $B \in \mathcal{B}$, and a linear "head" $v \in \mathcal{V}$, we have $f_{v,B}(x) = v^{\top}g_{B}(x)$. In our experiments (Section 4), $g_{B}$ is a deep network and in our theory (Section 3), $g_{B}$ is a linear projection. + +We assume access to some initial pretrained feature extractor $B_{0}$ that is obtained by training on potentially large amounts of data from a distribution that contains unlabeled or weakly supervised $x$ inputs from $P_{\mathrm{id}}$ and $P_{\mathrm{ood}}$. We focus on two popular methods to learn a predictor $f_{v,B}$ given training data from $P_{\mathrm{id}}$: (i) linear probing, where $B = B_{0}$ and the linear head is obtained by minimizing some loss (e.g., logistic loss for classification, squared loss for regression) on the training data, and (ii) fine-tuning, where both $v$ and $B$ are updated by performing gradient descent on some loss on the training data with $B$ initialized at $B_{0}$.
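The two methods differ only in which parameters gradient descent updates. A small numpy sketch in the linear setting analyzed in the next section (dimensions, step size, and iteration count are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 20, 3, 10                                  # input dim, feature dim, # train points
B_star = np.linalg.qr(rng.normal(size=(d, k)))[0].T  # true extractor, orthonormal rows (k x d)
v_star = rng.normal(size=k)                          # true head
B0 = B_star + 0.01 * rng.normal(size=(k, d))         # "good" pretrained feature extractor
X = rng.normal(size=(n, d))
Y = X @ B_star.T @ v_star                            # well-specified labels

def grad_step(v, B, lr=1e-3, update_B=True):
    """One gradient step on the training loss ||X B^T v - Y||^2."""
    r = X @ B.T @ v - Y                              # residuals
    v_new = v - lr * 2 * B @ X.T @ r                 # d/dv of the squared loss
    B_new = B - lr * 2 * np.outer(v, X.T @ r) if update_B else B
    return v_new, B_new

v_lp, B_lp = np.zeros(k), B0.copy()
v_ft, B_ft = np.zeros(k), B0.copy()
for _ in range(5000):
    v_lp, B_lp = grad_step(v_lp, B_lp, update_B=False)  # linear probing: B frozen
    v_ft, B_ft = grad_step(v_ft, B_ft, update_B=True)   # fine-tuning: B moves too
```

Linear probing leaves the feature extractor at $B_0$ exactly, while fine-tuning moves it; Section 3 analyzes what that movement does to inputs outside the training span.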
We use this result to show that linear probing gets better OOD error but worse ID error than fine-tuning (Section 3.3). Finally, we explain why linear probing then fine-tuning can mitigate this ID-OOD tradeoff (Section 3.4). + +Our analysis handles two key challenges which distinguish it from prior work on transfer learning in linear models (Wu et al., 2020; Tripuraneni et al., 2020; Du et al., 2020; Xie et al., 2021a). Prior work focuses on linear probing, while we study fine-tuning where the resulting optimization problem is non-convex. We also study overparameterized models where the training loss alone does not determine test performance—this captures the fact that both training neural networks from scratch and fine-tuning them have the same training loss but very different test performance. However, it also makes the analysis challenging because we need to reason about the trajectory of gradient methods starting from a pretrained initialization, which has no known closed form. + +# 3.1 LINEAR OVERPARAMETERIZED SETTING + +For our analysis, we focus on regression, where $\mathcal{V} = \mathbb{R}$ and $\ell(\widehat{y},y) = (\widehat{y} - y)^2$ is the squared loss. + +Models. Recall from Section 2 that we parameterize predictors in terms of the feature extractor and head parameters. In this section, we study models where the feature extractor is linear, i.e., $f_{v,B}(x) = v^\top Bx$ where $B \in \mathcal{B} = \mathbb{R}^{k \times d}$, and $v \in \mathcal{V} = \mathbb{R}^k$. + +Good pretrained features. For simplicity, we assume the models are well-specified, i.e., $y = v_{\star}^{\top}B_{\star}x$ where $v_{\star}\in \mathbb{R}^{k}$ and $B_{\star}\in \mathbb{R}^{k\times d}$. Note that $B_{\star}$ and $v_{\star}$ are only unique up to rotations, i.e., for any rotation matrix $U$, $(Uv_{\star})^{\top}(UB_{\star})x = v_{\star}^{\top}B_{\star}x$. As in prior work (Tripuraneni et al., 2020), suppose $B_{\star}$
As in prior work (Tripuraneni et al., 2020), suppose $B_{\star}$ and $B_0$ have been orthogonalized to have orthonormal rows. Suppose we have a pretrained feature extractor $B_0$ close to $B_{\star}$, so $d(B_0, B_\star) \leq \epsilon$, where the distance $d$ is defined as (with the min taken over rotation matrices $U \in \mathbb{R}^{k\times k}$):

$$
d(B, B') = \min_{U} \|B - U B'\|_{2}. \tag{3.1}
$$

Training data. Let $X \in \mathbb{R}^{n \times d}$, $X \neq 0$, be a matrix encoding $n$ training examples from $P_{\mathrm{id}}$, where each of the $n$ rows is a training input, and let $Y \in \mathbb{R}^n$ be the corresponding outputs. Let $S = \operatorname{rowspace}(X)$ be the $m$-dimensional subspace spanned by the training examples. We consider an overparameterized setting where $1 \leq m < d - k$. Intuitively, the input dimension $d$ is high (e.g., 10K), the feature dimension $k$ is lower (e.g., 100), and $m$ is in the middle (e.g., 5K).

Large OOD shift. We assume that the OOD data contains examples outside the span of the training data. Formally, let $P_{\mathrm{ood}}$ have second moment $\Sigma = \mathbb{E}[xx^{\top}]$ where $x \sim P_{\mathrm{ood}}$, for invertible $\Sigma$.

Training methods. Given training data and a pretrained feature extractor $B_{0}$, we study the two popular methods of linear probing (LP) and fine-tuning (FT) to learn the final predictor. Both methods optimize the training loss via gradient descent (or variants). To analyze these gradient-based algorithms effectively, we study vanishing step sizes leading to gradient flows. Gradient flows can be thought of as a continuous-time analogue of gradient-based methods and have been extensively studied in recent years as a way to understand them (Gunasekar et al., 2017; Arora et al., 2018; Du et al., 2018).
Formally, for training loss $\widehat{L}(v,B) = \|XB^{\top}v - Y\|_2^2$, the gradient flow differential equations for FT (3.2) and LP (3.3) are:

$$
\partial_{t} v_{\mathrm{ft}}(t) = -\nabla_{v} \widehat{L}\left(v_{\mathrm{ft}}(t), B_{\mathrm{ft}}(t)\right), \quad \partial_{t} B_{\mathrm{ft}}(t) = -\nabla_{B} \widehat{L}\left(v_{\mathrm{ft}}(t), B_{\mathrm{ft}}(t)\right), \tag{3.2}
$$

$$
\partial_{t} v_{\mathrm{lp}}(t) = -\nabla_{v} \widehat{L}\left(v_{\mathrm{lp}}(t), B_{0}\right), \quad \partial_{t} B_{\mathrm{lp}}(t) = 0, \tag{3.3}
$$

initialized with $B_{\mathrm{ft}}(0) = B_{\mathrm{lp}}(0) = B_0$ and $v_{\mathrm{ft}}(0) = v_{\mathrm{lp}}(0) = v_0$. In practice, the head parameter $v_0$ is initialized randomly—our results hold for any standard random initialization (Glorot & Bengio, 2010), for example $v_0 \sim \mathcal{N}(0,\sigma^2 I)$ for any $\sigma^2$, or zero initialization where $v_0 = 0$. Recall that the initial value of the feature extractor $B_0$ is obtained via pretraining.

The final LP and FT solutions are the limit points of the corresponding gradient flows:

$$
v_{\mathrm{ft}}^{\infty} = \lim_{t \rightarrow \infty} v_{\mathrm{ft}}(t) \quad \text{and} \quad B_{\mathrm{ft}}^{\infty} = \lim_{t \rightarrow \infty} B_{\mathrm{ft}}(t), \tag{3.4}
$$

$$
v_{\mathrm{lp}}^{\infty} = \lim_{t \rightarrow \infty} v_{\mathrm{lp}}(t) \quad \text{and} \quad B_{\mathrm{lp}}^{\infty} = \lim_{t \rightarrow \infty} B_{\mathrm{lp}}(t) = B_{0}. \tag{3.5}
$$

# 3.2 FINE-TUNING DISTORTS PRETRAINED FEATURES

The more common method of using a pretrained feature extractor is fine-tuning (FT), which typically improves ID performance relative to linear probing (LP). In this section, we show that FT can distort features, leading to poor OOD performance. We first explain the key intuitions and then present our formal theorem lower bounding the OOD error of FT (Section 3.2.2).
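The gradient flows (3.2)-(3.3) can be approximated by gradient descent with a small step size. Below is a minimal numpy sketch on a toy instance of our own construction (the dimensions, seed, the simplification $B_0 = B_\star$, and the deterministic head initialization are illustrative assumptions, not the paper's exact setup): linear probing leaves $B$ fixed at $B_0$ while fine-tuning updates both parameters, and with perfect features both drive the training loss to (near) zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 3, 6, 2                          # overparameterized: m = n = 3 < d - k

# Toy instance (our construction): B_star has orthonormal rows, Y is noiseless.
B_star = np.linalg.qr(rng.normal(size=(d, k)))[0].T        # (k, d)
v_star = np.array([1.0, -0.5])
X = rng.normal(size=(n, d)) / np.sqrt(d)                   # n training inputs
Y = X @ B_star.T @ v_star

B0 = B_star.copy()                         # perfect pretrained features (epsilon = 0)
v0 = 0.5 * v_star                          # deterministic stand-in for a random head

def grads(v, B):
    """Gradients of L(v, B) = ||X B^T v - Y||_2^2."""
    r = X @ B.T @ v - Y
    return 2 * B @ X.T @ r, 2 * np.outer(v, X.T @ r)

lr, T = 0.05, 100_000                      # small step size approximates the flow
v_ft, B_ft = v0.copy(), B0.copy()          # fine-tuning: update v and B   (3.2)
v_lp, B_lp = v0.copy(), B0.copy()          # linear probing: update v only (3.3)
for _ in range(T):
    gv, gB = grads(v_ft, B_ft)
    v_ft, B_ft = v_ft - lr * gv, B_ft - lr * gB
    v_lp = v_lp - lr * grads(v_lp, B_lp)[0]

train_loss = lambda v, B: float(np.sum((X @ B.T @ v - Y) ** 2))
print(train_loss(v_ft, B_ft), train_loss(v_lp, B_lp))
```

Both methods fit the training data here; the results below concern what happens off the training subspace.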
# 3.2.1 KEY INTUITIONS

We use two main observations to characterize when and why FT has higher OOD error than LP.

1. Features get distorted: representations change only in the ID subspace (i.e., the subspace spanned by the training data) and are unchanged in the orthogonal subspace. To see this, we take the derivative of the training loss $\widehat{L}(v,B) = \|XB^\top v - Y\|_2^2$ with respect to the feature extractor parameter $B$:

$$
\nabla_{B} \widehat{L}(v, B) = 2 v\left(X B^{\top} v - Y\right)^{\top} X. \tag{3.6}
$$

By definition, if $u$ is a direction orthogonal to the training subspace $S = \operatorname{rowspace}(X)$, then $\nabla_B \widehat{L}(v, B)u = 0$; that is, the gradient updates to $B$ do not modify $Bu$ for $u \in S^\perp$. However, the gradient is non-zero for directions $u$ in the ID subspace, and the corresponding features $Bu$ change across the fine-tuning process. We call this feature distortion: the features are changed in some directions but not others. Next, we explain why this can lead to high OOD error.

2. Distorted features can lead to higher OOD error. Consider a toy example (Figure 2) where $d = 2$ and the dimensionality of the representations is $k = 1$. The linear head $v$ is a scalar quantity that denotes how much the feature extractor $B$ has to be scaled by. Suppose the ID subspace is the $x$-axis. There are different ways of fitting the ID subspace depending on the feature extractor $B$, as

![](images/c99eedb717512221d6cea67ee79de09a7f6781d7e3efaba133fe60294279feb0.jpg)
(a) Toy example (linear probing)

![](images/56535ebbd05e5bd369a3ce468a1aa235769f097f727a9d5c614644f3b158feb6.jpg)
(b) Toy example (fine-tuning)

Figure 2: A toy version of our theory illustrating why fine-tuning distorts features, with inputs in 2D. Given input $x$, the ground truth output is $y = w_{\star}^{\top}x$. The ID data is along the $x$-axis and the pretrained feature extractor is $B_0$.
(a) Linear probing learns $w_{\mathrm{lp}}$ , a scaling of the pretrained feature extractor that gets the ID data correct ( $w_{\mathrm{lp}}$ and $w_{\star}$ have the same $x$ coordinate as indicated by the vertical dotted line). (b) Fine-tuning updates the pretrained feature extractor along the ID data (so horizontally) to get $B_{\mathrm{ft}}$ , and then learns a scaling of these features that gets the ID data correct. While both methods get ID data correct, fine-tuning makes large errors perpendicular to the ID data, because fine-tuning updates $B_0$ along the ID direction but not the perpendicular direction. + +shown in the Figure—both fine-tuned and linear probed estimators match the true parameter in the ID subspace (since $w_{\mathrm{lp}}, w_{\mathrm{ft}}, w_{\star}$ have the same projection on the $x$ -axis). If the feature extractor were optimal or scaled versions of the optimal, good performance on the ID subspace would translate to good performance everywhere, even in directions orthogonal to the ID subspace. However, in FT, the features change only for inputs in the ID subspace (see (1)) and thus the updated features are not simply scaled but distorted. In Figure 2, this corresponds to the feature extractor $B_0$ changing along the $x$ -axis. In this case even if the ID error is low, error in directions orthogonal to the ID subspace can be high, leading to high OOD error. + +The only way the pretrained features are not distorted and only scaled during FT is if the initial feature extractor $B_{0}$ is exactly aligned with the ID subspace. In Figure 2, if $B_{0}$ is along the $x$ -axis (the ID subspace), then updating the features exclusively along the $x$ -axis would simply scale the initial features. In this case linear probing and fine-tuning will have identical behavior. However, if the angle between $B_{0}$ and the $x$ -axis is non-zero, the updates would lead to distortions. 
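Observation (1) can be checked numerically: the fine-tuning gradient (3.6) only moves the rows of $B$ within $\operatorname{rowspace}(X)$, so $Bu$ is unchanged (to floating-point precision) for any $u \perp \operatorname{rowspace}(X)$, while features along the ID subspace drift. A small numpy sketch, with toy sizes and random data of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 3, 6, 2
X = rng.normal(size=(n, d))                       # training inputs; S = rowspace(X)
B0 = np.linalg.qr(rng.normal(size=(d, k)))[0].T   # pretrained extractor (k, d)
v = rng.normal(size=k)
Y = rng.normal(size=n)                            # labels the model does not yet fit

# A few fine-tuning gradient steps on L(v, B) = ||X B^T v - Y||^2.
B = B0.copy()
for _ in range(100):
    r = X @ B.T @ v - Y
    v, B = v - 1e-3 * 2 * B @ X.T @ r, B - 1e-3 * 2 * np.outer(v, X.T @ r)

Q, _ = np.linalg.qr(X.T, mode="complete")    # columns of Q: orthonormal basis of R^d
u_in, u_perp = Q[:, 0], Q[:, n]              # first n columns span S; the rest S-perp
print(np.linalg.norm(B @ u_perp - B0 @ u_perp))   # ~0: features unchanged off S
print(np.linalg.norm(B @ u_in - B0 @ u_in))       # > 0: features distorted along S
```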
In high dimensions, we measure the alignment between $B_{0}$ and the ID subspace with the largest principal angle:

Definition 3.1 (largest principal angle). Let $A$ and $B$ be arbitrary subspaces, and $E$ and $F$ be matrices with orthonormal columns that span $A$ and $B$ respectively, with $r = \min(\dim(A), \dim(B))$. Then $\cos\theta_{\max}(A, B) = \sigma_r(E^\top F)$, the $r$-th largest singular value of $E^\top F$.

# 3.2.2 GENERAL RESULT ON THE OOD ERROR OF FINE-TUNING

Our main theorem lower bounds the OOD error of fine-tuning outside the span of the training data.

Theorem 3.2. In the overparameterized linear setting, let $S^{\perp} = \operatorname{rowspace}(X)^{\perp}$, $R_0 = \operatorname{rowspace}(B_0)$, and $v_{\star}, B_{\star}$ be the optimal parameters with $w_{\star} = B_{\star}^{\top} v_{\star}$. If $\cos\theta_{\max}(R_0, S^{\perp}) > 0$, then for all times $t$, the OOD error of the fine-tuning iterates $(v_{\mathrm{ft}}(t), B_{\mathrm{ft}}(t))$ is lower bounded:

$$
\sqrt{L_{\mathrm{ood}}\left(v_{\mathrm{ft}}(t), B_{\mathrm{ft}}(t)\right)} \geq \sqrt{\sigma_{\min}(\Sigma)}\left(\frac{\cos\theta_{\max}\left(R_{0}, S^{\perp}\right)}{\sqrt{k}} \cdot \frac{\min\left(\varphi, \varphi^{2} / \|w_{\star}\|_{2}\right)}{\left(1 + \|w_{\star}\|_{2}\right)^{2}} - \epsilon\right), \tag{3.7}
$$

where $\varphi^2 = |(v_0^\top v_\star)^2 - (v_\star^\top v_\star)^2|$ is the initial head alignment error and $\epsilon \geq d(B_0, B_\star)$ is the error in the pretrained feature extractor.

Proof sketch. Since the features do not change for examples in $S^{\perp}$ (perpendicular to the training data), we show that in order to achieve low error on $S^{\perp}$, the linear head $v_{\mathrm{ft}}(t)$ would have to become very similar to the optimal $v_{\star}$ at some time $t$.
The head initialization $v_{0}$ is random (or zero) and likely to be far from $v_{\star}$ (as measured by the alignment error $\varphi$), so the head would have to change a lot to get close to $v_{\star}$. As we see from the fine-tuning gradient flow (3.2), $v_{\mathrm{ft}}(t)$ and $B_{\mathrm{ft}}(t)$ change in a "coupled" manner, and a "balancedness" invariant from Du et al. (2018) holds across the fine-tuning trajectory. Correspondingly, if $v_{\mathrm{ft}}(t)$ changes a lot and gets close to $v_{\star}$, the features $B_{\mathrm{ft}}(t)$ also change a lot for examples in $S$—we show that this would lead to high error on examples in $S$. Either way, fine-tuning would get some subspace ($S$ or $S^{\perp}$) of examples wrong, leading to high OOD error. The full proof appears in Appendix A.

Interpretations of various quantities. Quality of pretrained features $(\epsilon)$: to unpack the bound, consider a special case where the pretrained features are perfect $(\epsilon = 0)$. With perfect features, Proposition A.21 shows that linear probing gets zero OOD error. Theorem 3.2 shows that $L_{\mathrm{ood}}(v_{\mathrm{ft}}(t), B_{\mathrm{ft}}(t)) > 0$ at all times $t$, so fine-tuning underperforms linear probing even when the features are perfect.

Alignment error of random head initialization $(\varphi^2)$: the lower bound (Equation A.14) increases as $\varphi^2$ increases, because the gradient updates to the head and feature extractor are coupled. If the head were somehow initialized perfectly at $v_{\star}$, fine-tuning updates may not increase the OOD error. However, when the head is randomly initialized, as is standard in fine-tuning, the alignment error is high, leading to high OOD error. We use this insight in Section 3.4 to show that better head initialization (via linear probing) improves the OOD performance of fine-tuning.

# 3.3 LINEAR PROBING VS. FINE-TUNING

In this section, we use our main theorem on fine-tuning (Theorem 3.2) and adapt prior work on linear probing to show that linear probing is better than fine-tuning OOD, but worse ID, when the ID distribution has density on a lower-dimensional ($m < d$) subspace $S$ and $B_0$ is close to $B_{\star}$.

Assumption 3.3 (ID subspace assumption). We assume that the ID data lies on an $m$-dimensional subspace $S$ where $k < m < d - k$, and that we have $n \geq m$ training examples. Formally, let $P_z$ be a distribution on $\mathbb{R}^m$ which has density, and let the columns of $F \in \mathbb{R}^{d \times m}$ form an orthonormal basis for $S$. Then $P_{\mathrm{id}}$ is the distribution of $Fz$ where $z \sim P_z$.

Recall that the ID error is the expected mean-squared error over the ID distribution $P_{\mathrm{id}}$:

$$
L_{\mathrm{id}}(v, B) = \mathbb{E}_{x \sim P_{\mathrm{id}}}\left[\left(v_{\star}^{\top} B_{\star} x - v^{\top} B x\right)^{2}\right]. \tag{3.8}
$$

OOD comparison: under mild non-degeneracy conditions, we show that as the feature extractor error $\epsilon$ goes to 0, linear probing does much better than fine-tuning OOD: the ratio of the losses goes to 0. The non-degeneracy conditions are similar to Section 3.2—we require that the training data is neither exactly aligned with nor orthogonal to the pretrained features, formally that $\cos\theta_{\max}(R_\star, S)$ and $\cos\theta_{\max}(R_\star, S^{\perp})$ are not 0, where $R_\star = \operatorname{rowspace}(B_\star)$.

Theorem 3.4 (Informal version of Theorem A.9).
In the linear overparameterized setting, under the ID subspace assumption (Assumption 3.3), if $\cos\theta_{\max}(R_\star, S) \neq 0$ and $\cos\theta_{\max}(R_\star, S^{\perp}) \neq 0$ where $R_\star = \operatorname{rowspace}(B_\star)$, then

$$
\frac{L_{\mathrm{ood}}\left(v_{\mathrm{lp}}^{\infty}, B_{0}\right)}{L_{\mathrm{ood}}\left(v_{\mathrm{ft}}(t), B_{\mathrm{ft}}(t)\right)} \xrightarrow{p} 0 \quad \text{as } B_{0} \rightarrow B_{\star}. \tag{3.9}
$$

This holds for all times $t$ for FT (and therefore also for the limit $v_{\mathrm{ft}}^{\infty}, B_{\mathrm{ft}}^{\infty}$); the LP iterates converge to $v_{\mathrm{lp}}^{\infty}, B_{0}$ because the LP gradient flow is on a convex problem.

Intuitively, if the pretrained features are good, LP learns a near-optimal linear head which has small OOD error (Lemma A.15), whereas fine-tuning has high OOD error (Theorem 3.2). We give a more formal version of Theorem 3.4 and a proof in Appendix A.3.

ID comparison: when the pretrained features have some error, we show that fine-tuning does better than linear probing ID because fine-tuning can update the features to fit the ID data. The non-degeneracy condition on $R_{\mathrm{aug}}$ below is similar to our previous results and holds with probability 1 if the ID subspace is chosen randomly, by Lemma A.17.

Proposition 3.5. In the linear overparameterized setting, under the ID subspace assumption (Assumption 3.3), let $R_0 = \operatorname{rowspace}(B_0)$ and $R_{\mathrm{aug}} = \operatorname{Span}(\{w_\star\} \cup R_0)$. Suppose $w_\star \notin R_0$, $\cos\theta_{\max}(S, R_{\mathrm{aug}}) \neq 0$, and that fine-tuning converges to a local minimum of its loss. Then fine-tuning does better ID: $L_{\mathrm{id}}(v_{\mathrm{ft}}^\infty, B_{\mathrm{ft}}^\infty) < L_{\mathrm{id}}(v_{\mathrm{lp}}^\infty, B_0)$ with probability 1 (over the randomness of the training examples).
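The largest principal angle of Definition 3.1, which appears in the non-degeneracy conditions above, is computable directly from an SVD. A minimal numpy sketch (the helper name `cos_theta_max` is ours):

```python
import numpy as np

def cos_theta_max(E, F):
    """Cosine of the largest principal angle between the subspaces spanned by
    the orthonormal columns of E and F (Definition 3.1): the r-th largest
    singular value of E^T F, where r is the smaller subspace dimension."""
    s = np.linalg.svd(E.T @ F, compute_uv=False)
    return s[min(E.shape[1], F.shape[1]) - 1]

# x-axis vs. a line at 45 degrees in R^2: the largest principal angle is 45 deg.
E = np.array([[1.0], [0.0]])
F = np.array([[np.cos(np.pi / 4)], [np.sin(np.pi / 4)]])
G = np.array([[0.0], [1.0]])                      # the y-axis, orthogonal to E
print(cos_theta_max(E, F))                        # ~0.7071
print(cos_theta_max(E, E), cos_theta_max(E, G))   # 1.0 (aligned), 0.0 (orthogonal)
```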
To summarize, we proved that there are tradeoffs between ID and OOD error: FT has lower ID error but higher OOD error than LP. In the next section, we extend our theoretical insights to illustrate why a simple variant of FT may mitigate such tradeoffs.

# 3.4 LINEAR PROBING THEN FINE-TUNING: A SIMPLE VARIANT TO MITIGATE TRADEOFFS

The advantage of fine-tuning is that it can adapt the feature extractor to fit the downstream task. Can we keep this benefit while ensuring that the OOD error stays low when we have good pretrained features?

Going back to Theorem 3.2, we see that the alignment error of the head initialization, $\varphi^2 = |(v_0^\top v_\star)^2 - (v_\star^\top v_\star)^2|$, plays an important role. The issue with FT was that under random or zero initialization, $\varphi^2$ is usually large, and since the gradient updates to the feature extractor parameter are coupled with those of the head parameter, the features get distorted in a manner that increases the OOD error. This suggests that we should use a better head initialization—one obtained from linear probing. If the pretrained features are decent, a linear-probed head is much better aligned with $v_\star$, allowing the features to be updated in a manner that does not increase the OOD error much.

We formally prove this intuition in a simple setting where we have perfect pretrained features. Note that in this case linear probing alone gets zero OOD error—so Proposition 3.6 is just a first-cut result to illustrate that, if initialized well, full fine-tuning does not distort features.

Proposition 3.6. Suppose we have perfect pretrained features $B_0 = UB_\star$ for some rotation $U$, and let $R_0 = \operatorname{rowspace}(B_0)$.
Under the non-degeneracy conditions $\cos\theta_{\max}(R_0, S) \neq 0$ and $\cos\theta_{\max}(R_0, S^\perp) \neq 0$:

$$
\forall t, \; L_{\mathrm{ood}}\left(B_{\mathrm{ft}}(t)^{\top} v_{\mathrm{ft}}(t)\right) > 0, \quad \text{if } v_{0} \sim \mathcal{N}\left(0, \sigma^{2} I\right) \text{ is randomly initialized (FT)}, \tag{3.10}
$$

$$
\forall t, \; L_{\mathrm{ood}}\left(B_{\mathrm{ft}}(t)^{\top} v_{\mathrm{ft}}(t)\right) = 0, \quad \text{if } v_{0} \text{ is initialized to } v_{\mathrm{lp}}^{\infty} \text{ (LP-FT)}. \tag{3.11}
$$

# 4 EXPERIMENTS

We run experiments on ten benchmark datasets with deep neural networks and see that, given good pretrained features, fine-tuning (FT) does better ID but worse OOD than linear probing (LP). As predicted by the theory, we find that LP-FT does better than both methods. Finally, we see that a number of predictions from the feature distortion theory hold up in practice. For more details on datasets, pretraining models, and experiment protocols, see Appendix B.

We use standard distribution shift datasets: DomainNet (Peng et al., 2019; Tan et al., 2020), BREEDS-Living-17 (Santurkar et al., 2020), BREEDS-Entity-30 (Santurkar et al., 2020), CIFAR-10 $\rightarrow$ STL (Krizhevsky, 2009; Coates et al., 2011; French et al., 2018), CIFAR-10 $\rightarrow$ CIFAR-10.1 (Recht et al., 2018), ImageNet-1K (Russakovsky et al., 2015)—where the OOD test sets are ImageNetV2 (Recht et al., 2019), ImageNet-R (Hendrycks et al., 2020), ImageNet-A (Hendrycks et al., 2019b), and ImageNet-Sketch (Wang et al., 2019)—and FMoW Geo-shift, which is adapted from the satellite remote sensing dataset Functional Map of the World (Christie et al., 2018; Koh et al., 2021). See Appendix B for more details on the datasets.

Pretraining and models. We use a CLIP pretrained ViT-B/16 for ImageNet.
For the other datasets we use a ResNet-50 architecture and consider a diverse range of pretraining methods and datasets: MoCo-v2 (Chen et al., 2020b), CLIP (Radford et al., 2021), and MoCo-TP (Ayush et al., 2020). In Appendix B, we also show results for a CLIP-ViT-B/16 and more fine-tuning baselines on Living-17. + +# 4.1 LINEAR PROBING VS FINE-TUNING + +Experiment protocols. We initialize with the pretrained model, and fine-tune or linear probe on ID training examples. For fine-tuning on each dataset we swept over 6 learning rates, using a cosine learning rate schedule and batch size of 64. We early stop and choose the best learning rate using ID validation accuracy. For linear probing we train an $\ell_2$ -regularized logistic regression classifier on frozen features from the penultimate layer of the pretrained model, selecting the best $\ell_2$ -regularization hyperparameter based on ID validation accuracy. For all methods, we run each hyperparameter configuration 3 times (with different random seeds), and take the average accuracy. We used a slightly different protocol for ImageNet because the dataset is much larger and running these experiments involves more computational resources: we used a batch size of 128, swept over 3 learning rates for both fine-tuning and linear probing (we did not sweep over $\ell_2$ -regularization), and ran each hyperparameter configuration once. In all cases, OOD data was only used for evaluation. + +Results. Fine-tuning (FT) does better than linear probing (LP) on 5 out of 6 ID datasets (average accuracy of $85.1\%$ for FT vs. $82.9\%$ for LP, see Table 1). This is consistent with prior work and intuitions. However, linear probing does better on 8 out of 10 OOD datasets (average accuracy of $66.2\%$ for LP vs. $59.3\%$ for FT, see Table 2)—LP does better on all datasets except CIFAR-10.1 and ImageNetV2, where the OOD is designed to closely replicate the ID dataset. 
This matches + +Table 1: ID accuracies with $90\%$ confidence intervals over 3 runs—fine-tuning does better than linear probing on all datasets except DomainNet (which could be because the version of the DomainNet training dataset from Tan et al. (2020) is fairly small, with around 20K examples). LP-FT does the best on all except FMoW where it is in between linear probing and fine-tuning. + +
| | CIFAR-10 | Ent-30 | Liv-17 | DomainNet | FMoW | ImageNet | Average |
|---|---|---|---|---|---|---|---|
| FT | 97.3 (0.2) | 93.6 (0.2) | 97.1 (0.2) | 84.5 (0.6) | 56.5 (0.3) | 81.7 (-) | 85.1 |
| LP | 91.8 (0.0) | 90.6 (0.2) | 96.5 (0.2) | 89.4 (0.1) | 49.1 (0.0) | 79.7 (-) | 82.9 |
| LP-FT | 97.5 (0.1) | 93.7 (0.1) | 97.8 (0.2) | 91.6 (0.0) | 51.8 (0.2) | 81.7 (-) | 85.7 |
+ +Table 2: OOD accuracies with $90\%$ confidence intervals over 3 runs. Linear probing does better than fine-tuning on all datasets except CIFAR-10.1 and ImageNetV2, where the ID and OOD are similar (consistent with our theory). LP-FT does the best on all 10 datasets. + +
| | STL | CIFAR-10.1 | Ent-30 | Liv-17 | DomainNet | FMoW |
|---|---|---|---|---|---|---|
| FT | 82.4 (0.4) | 92.3 (0.4) | 60.7 (0.2) | 77.8 (0.7) | 55.5 (2.2) | 32.0 (3.5) |
| LP | 85.1 (0.2) | 82.7 (0.2) | 63.2 (1.3) | 82.2 (0.2) | 79.7 (0.6) | 36.6 (0.0) |
| LP-FT | 90.7 (0.3) | 93.5 (0.1) | 62.3 (0.9) | 82.6 (0.3) | 80.7 (0.9) | 36.8 (1.3) |

| | ImNetV2 | ImNet-R | ImNet-Sk | ImNet-A | Average |
|---|---|---|---|---|---|
| FT | 71.5 (-) | 52.4 (-) | 40.5 (-) | 27.8 (-) | 59.3 |
| LP | 69.7 (-) | 70.6 (-) | 46.4 (-) | 45.7 (-) | 66.2 |
| LP-FT | 71.6 (-) | 72.9 (-) | 48.4 (-) | 49.1 (-) | 68.9 |
our theoretical predictions. Our training datasets vary in size from 20K examples to over a million examples, so LP does not appear to perform better than FT simply because of a small training set.

# 4.2 LINEAR PROBING THEN FINE-TUNING (LP-FT)

Experiment protocols. For LP-FT, we initialize the neural network head using the linear-probed solution, and then fine-tune the model. LP-FT and fine-tuning use similar compute because the linear probing step is much faster than fine-tuning. As with fine-tuning, we swept over 6 learning rates, early stopping using ID validation accuracy. For the ImageNet experiments we swept over 3 learning rates and explicitly ensured that LP-FT and fine-tuning use exactly the same compute (we ran each stage of LP-FT for half as many epochs as we ran vanilla fine-tuning).

Results. We find that LP-FT gets the best accuracy ID (average: $85.7\%$) and OOD (average: $68.9\%$). This is true for 5/6 ID and 10/10 OOD datasets—every dataset except FMoW ID, where LP-FT is better than linear probing but worse than fine-tuning. Since the ID accuracy on FMoW is low ($56.5\%$), this could be because the pretrained features are not good.

# 4.3 EXAMINING THE FEATURE DISTORTION THEORY

Early stopping does not mitigate feature distortion. Our theory predicts that fine-tuning can do worse OOD (than linear probing) throughout the process of fine-tuning, not just at the end. To test this, we early stop each fine-tuning method and choose the best learning rate based on OOD test accuracy. As expected, fine-tuning does improve a little, but linear probing (average accuracy: $67.1\%$) is still better than fine-tuning (average accuracy: $61.3\%$). See Appendix B for per-dataset results.

ID features get distorted more than OOD features by fine-tuning. The feature distortion theory predicts that fine-tuning changes features for ID examples more than for OOD examples, which is why fitting a head on ID examples performs poorly OOD.
To test this, for each example $x$ in Living-17 (results for other datasets are in Appendix B), we took the Euclidean distance of the ResNet-50 features before and after fine-tuning: $\| g_B(x) - g_{B_0}(x)\| _2$ . As expected, the average distance for ID examples $(0.0188\pm 0.0001)$ is more than for OOD examples $(0.0167\pm 0.0001)$ . The theory also predicts that LP-FT changes features less than fine-tuning does. As expected, the average distance changed by LP-FT both ID $(0.0011\pm 0.0001)$ and OOD $(0.0009\pm 0.0001)$ is $20\times$ smaller than for fine-tuning. + +Pretrained features must be good, ID-OOD far apart. Our theory says that linear probing does better than fine-tuning OOD, but only if the OOD and ID data are quite different, and the pretrained features are good—otherwise fine-tuning can do better OOD by adjusting the feature extractor ID. + +Feature quality: We use a checkpoint of MoCo-v1 that got $10\%$ worse accuracy (on ImageNet) and compare linear probing and fine-tuning on Living-17. With worse features, both methods do worse, but fine-tuning $(96\%$ ID, $71\%$ OOD) does better than linear probing $(92\%$ ID, $66\%$ OOD). + +$ID \approx OOD$ : We fine-tune / linear probe on CIFAR-10, and test on CIFAR-10.1, a dataset collected using a similar protocol to CIFAR-10. As expected, fine-tuning (92.3%) outperforms linear probing OOD (82.7%). Even in this case, where we have no tradeoffs, LP-FT does the best (93.5%). + +# 5 RELATED WORK AND DISCUSSION + +Fine-tuning vs. linear probing. Fine-tuning (FT) and linear probing (LP) are popular transfer learning algorithms. There is substantial evidence of FT outperforming LP in-distribution (ID) including recent large-scale investigations (Kornblith et al., 2019; Chen et al., 2021; Zhai et al., 2020; Chen et al., 2020b) (the only notable exception is in Peters et al. (2019) where LP performs better than FT when using ELMo representations, but worse using BERT). 
FT is therefore the method of choice for improving accuracy, while LP is used to analyze properties of representations (Peters et al., 2018; Belinkov et al., 2017; Hewitt & Manning, 2019). In our work, we find that FT can underperform LP, especially when using high-quality pretrained features in the presence of a large distribution shift. There are a variety of other fine-tuning heuristics (Ge & Yu, 2017; Guo et al., 2019; Zhang et al., 2020; Zhu et al., 2020; Jiang et al., 2021; Aghajanyan et al., 2021)—combining our insights with these ideas might lead to better methods.

The benefit of preserving pretrained features. Our work adds to growing evidence that lightweight fine-tuning, where only a small part of a pretrained model is updated, can perform better under distribution shifts—and we give a theoretical grounding for why this might be the case. Zero-shot language prompting in vision (Radford et al., 2021) and other lightweight fine-tuning approaches in NLP (Houlsby et al., 2019; Li & Liang, 2021; Xie et al., 2021b; Lester et al., 2021; Utama et al., 2021; Zhou et al., 2021) have been shown to improve OOD performance. Andreassen et al. (2021) observe that through the course of fine-tuning, ID accuracy increases but OOD accuracy plateaus.

Mitigating ID-OOD tradeoffs. While LP-FT has sometimes been used as a fine-tuning heuristic (Levine et al., 2016; Kanavati & Tsuneki, 2021; fastai), it has not been used for robustness / OOD accuracy, and we show that it addresses the ID-OOD tradeoff theoretically and empirically. Tradeoffs between ID and OOD accuracy are widely studied, and prior work self-trains on large amounts of unlabeled data to mitigate such tradeoffs (Raghunathan et al., 2020; Xie et al., 2021a; Khani & Liang, 2021). In contrast, LP-FT uses no extra unlabeled data and is a simple variant of fine-tuning. In concurrent and independent work, Wortsman et al.
(2021) show that ensembling the weights of a zero-shot and a fine-tuned model mitigates the ID-OOD tradeoff between these approaches, and this method could be promising for our datasets as well.

Theoretical analysis of transfer learning. Prior works on transfer learning mainly analyze linear probing (Wu et al., 2020; Tripuraneni et al., 2020; Du et al., 2020). Recent works (Chua et al., 2021; Shachaf et al., 2021) study fine-tuning, but in the underparameterized regime (where there is a unique global optimum) or assuming a balanced initialization. Prior works also focus on ID error, while we analyze OOD error. See Section C for additional related work on the theory of overparameterized models.

# 6 CONCLUSION

There is a strong trend towards leveraging pretrained models to improve downstream performance, and whenever feasible, it is common to fine-tune all model parameters. In this work, we show theoretically and empirically that preserving features might be important for robustness, and that simpler approaches like linear probing can improve out-of-distribution (OOD) performance. This OOD gap between fine-tuning and linear probing grows as the quality of the pretrained features improves, so we believe our results are likely to gain significance over time with growing innovations and scale of pretraining.

Finally, we showed that LP-FT can mitigate tradeoffs between ID and OOD accuracy in our context. LP-FT could be useful in other situations: for example, in CLIP we could initialize the final layer with the zero-shot classifier and then fine-tune the entire model, as done in concurrent work (Wortsman et al., 2021). In NLP, linear probing is not as good—here we could first prompt-tune (Lester et al., 2021) and then fine-tune the entire model. LP-FT is just a first step in leveraging the intuition from our theoretical analysis, and we hope that this work inspires new methods of leveraging powerful pretrained models.
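To make the LP-FT recipe concrete, the linear setting of Section 3 can be simulated directly: with perfect pretrained features, fine-tuning from a linear-probed head leaves the OOD error at (numerically) zero, while fine-tuning from a misaligned head does not, mirroring Proposition 3.6. A minimal numpy sketch under a toy construction of our own (sizes, seed, and a closed-form least-squares probe standing in for running flow (3.3) to convergence):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 3, 6, 2
B_star = np.linalg.qr(rng.normal(size=(d, k)))[0].T
v_star = np.array([1.0, -0.5])
X = rng.normal(size=(n, d)) / np.sqrt(d)
Y = X @ B_star.T @ v_star
B0 = B_star.copy()                        # perfect pretrained features (Prop. 3.6)
w_star = B_star.T @ v_star

def fine_tune(v, B, lr=0.05, T=30_000):   # discretized fine-tuning flow (3.2)
    for _ in range(T):
        r = X @ B.T @ v - Y
        v, B = v - lr * 2 * B @ X.T @ r, B - lr * 2 * np.outer(v, X.T @ r)
    return v, B

# With Sigma = I, the OOD error is the squared parameter distance to w_star.
ood = lambda v, B: float(np.sum((B.T @ v - w_star) ** 2))

v_lp = np.linalg.lstsq(X @ B0.T, Y, rcond=None)[0]        # probed head (closed form)
v_ft, B_ft = fine_tune(np.array([-0.5, 1.0]), B0.copy())  # FT: misaligned head init
v_lpft, B_lpft = fine_tune(v_lp.copy(), B0.copy())        # LP-FT: probed head init
print(ood(v_ft, B_ft), ood(v_lpft, B_lpft))               # FT > 0, LP-FT ~ 0
```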
+ +Proofs and Reproducibility: We include proofs for our theoretical results in Appendix A and additional experiment details in Appendix B. Updated code is available at https://github.com/AnanyaKumar/transfer_learning and this CodaLab worksheet. + +Acknowledgements: We would like to thank Kumar Ayush and Burak Uzkent for MoCo checkpoints pretrained on unlabeled FMoW images, Nilesh Tripuraneni for clarifications on his work and references on principal angles, Daniel Levy for useful suggestions on experiments to run, Niladri Chatterji, Jeff Z. HaoChen, and Colin Wei for useful papers and comments on figures, Niladri Chatterji and Kaidi Cao for reviewing the paper at ML paper swap, Kevin Yang for his help with analyzing differential equations, Tri Dao and Pang Wei Koh for help with writing, Suriya Gunasekar, Adam Kalai, Simon Kornblith, Ting Chen, Sang Michael Xie, Albert Gu, and Kendrick Shen for useful discussions, and Pang Wei Koh, Niladri Chatterji, and Tri Dao for suggestions on framing our results better. + +Ananya Kumar was supported by the Rambus Corporation Stanford Graduate Fellowship. Percy Liang was supported by the Open Philanthropy Project and NSF Award Grant No. 1805310. Aditi Raghunathan was supported by a Google PhD Fellowship and Open Philanthropy Project AI Fellowship. Tengyu Ma acknowledges support of a Google Faculty Award, NSF IIS 2045685, the Sloan Fellowship, JD.com, SAIL, and SDSI. + +# REFERENCES + +Armen Aghajanyan, Akshit Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, and Sonal Gupta. Better fine-tuning by reducing representational collapse. In International Conference on Learning Representations (ICLR), 2021. +EA AlBadawy, A Saha, and MA Mazurowski. Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing. Med Phys., 45, 2018. +Anders Andreassen, Yasaman Bahri, Behnam Neyshabur, and Rebecca Roelofs. The evolution of out-of-distribution robustness throughout fine-tuning. arXiv, 2021. 
Sanjeev Arora, Nadav Cohen, and Elad Hazan. On the optimization of deep networks: Implicit acceleration by overparameterization. In International Conference on Machine Learning (ICML), pp. 244-253, 2018.
Kumar Ayush, Burak Uzkent, Chenlin Meng, Kumar Tanmay, M. Burke, D. Lobell, and Stefano Ermon. Geography-aware self-supervised learning. arXiv, 2020.
Peter L. Bartlett, Philip M. Long, Gábor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. arXiv, 2019.
Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. What do neural machine translation models learn about morphology? In Association for Computational Linguistics (ACL), pp. 861-872, 2017.
Mikhail Belkin, Daniel Hsu, and Ji Xu. Two models of double descent for weak features. arXiv, 2019.
Koby Bibas, Yaniv Fogel, and Meir Feder. A new look at an old problem: A universal learning approach to linear regression. In 2019 IEEE International Symposium on Information Theory (ISIT), pp. 2304-2308, 2019.
Tianle Cai, Ruiqi Gao, J. Lee, and Qi Lei. A theory of label propagation for subpopulation shift. In International Conference on Machine Learning (ICML), 2021.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning (ICML), pp. 1597-1607, 2020a.
Xinlei Chen, Haoqi Fan, Ross B. Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv, 2020b.
Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. arXiv preprint arXiv:2104.02057, 2021.
Gordon Christie, Neil Fendley, James Wilson, and Ryan Mukherjee. Functional map of the world. In Computer Vision and Pattern Recognition (CVPR), 2018.
Kurtland Chua, Qi Lei, and Jason D Lee. How fine-tuning allows for effective meta-learning. arXiv preprint arXiv:2105.02221, 2021.
Adam Coates, Andrew Ng, and Honglak Lee.
An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15, pp. 215-223, 2011.
+Simon S. Du, Wei Hu, Sham M. Kakade, Jason D. Lee, and Qi Lei. Few-shot learning via learning the representation, provably. arXiv, 2020.
+Simon Shaolei Du, Wei Hu, and Jason Lee. Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced. In Advances in Neural Information Processing Systems (NeurIPS), 2018.
+fastai. fastai tutorial on transfer learning. https://github.com/fastai/course-v3/blob/master/nbs/dl1/lesson1-pets.ipynb.
+
+Geoff French, Michal Mackiewicz, and Mark Fisher. Self-ensembling for visual domain adaptation. In International Conference on Learning Representations, 2018.
+Weifeng Ge and Yizhou Yu. Borrowing treasures from the wealthy: Deep transfer learning through selective joint fine-tuning. In Computer Vision and Pattern Recognition (CVPR), 2017.
+Gauthier Gidel, Francis R. Bach, and Simon Lacoste-Julien. Implicit regularization of discrete gradient dynamics in deep linear neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
+Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, 2010.
+Gene H. Golub and Charles F. Van Loan. Matrix Computations. The Johns Hopkins University Press, 2013.
+Suriya Gunasekar, Blake E Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Implicit regularization in matrix factorization. In Advances in Neural Information Processing Systems (NeurIPS), pp. 6151-6159, 2017.
+Yunhui Guo, Honghui Shi, Abhishek Kumar, Kristen Grauman, Tajana Rosing, and Rogerio Feris. Spottune: Transfer learning through adaptive fine-tuning. In Computer Vision and Pattern Recognition (CVPR), 2019.
+Trevor Hastie, Andrea Montanari, Saharon Rosset, and Ryan J Tibshirani. Surprises in high-dimensional ridgeless least squares interpolation. arXiv preprint arXiv:1903.08560, 2019. +Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Computer Vision and Pattern Recognition (CVPR), 2020. +Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. In International Conference on Machine Learning (ICML), 2019a. +Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. arXiv preprint arXiv:1907.07174, 2019b. +Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, Dawn Song, Jacob Steinhardt, and Justin Gilmer. The many faces of robustness: A critical analysis of out-of-distribution generalization. arXiv preprint arXiv:2006.16241, 2020. +John Hewitt and Christopher D. Manning. A structural probe for finding syntax in word representations. In Association for Computational Linguistics (ACL), 2019. +Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. arXiv, 2019. +Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. In Association for Computational Linguistics (ACL), 2018. +Neal Jean, Marshall Burke, Michael Xie, W. Matthew Davis, David B. Lobell, and Stefano Ermon. Combining satellite imagery and machine learning to predict poverty. Science, 353, 2016. +Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. Smart: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. In International Conference on Learning Representations (ICLR), 2021. 
+Fahdi Kanavati and Masayuki Tsuneki. Partial transfusion: on the expressive influence of trainable batch norm parameters for transfer learning. In Medical Imaging with Deep Learning, 2021. +Fereshte Khani and Percy Liang. Removing spurious features can hurt accuracy and affect groups disproportionately. In ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2021. + +Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, Tony Lee, Etienne David, Ian Stavness, Wei Guo, Berton A. Earnshaw, Imran S. Haque, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. WILDS: A benchmark of in-the-wild distribution shifts. In International Conference on Machine Learning (ICML), 2021. +Simon Kornblith, Jonathon Shlens, and Quoc V. Le. Do better imagenet models transfer better? In Computer Vision and Pattern Recognition (CVPR), 2019. +Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. +Thomas Laurent and James H. von Brecht. Deep linear neural networks with arbitrary loss: All local minima are global. In International Conference on Machine Learning (ICML), 2018. +Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021. +S. Levine, Chelsea Finn, Trevor Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research (JMLR), 17, 2016. +Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. In Association for Computational Linguistics (ACL), 2021. +Xuhong Li, Yves Grandvalet, and Franck Davoine. Explicit inductive bias for transfer learning with convolutional networks. In International Conference on Machine Learning (ICML), 2018. +Song Mei and Andrea Montanari. 
The generalization error of random features regression: Precise asymptotics and double descent curve. arXiv preprint arXiv:1908.05355, 2019. +John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, and Ludwig Schmidt. Accuracy on the line: on the strong correlation between out-of-distribution and in-distribution generalization. In International Conference on Machine Learning (ICML), 2021. +Vidya Muthukumar, Kailas Vodrahalli, Vignesh Subramanian, and Anant Sahai. Harmless interpolation of noisy data in regression. IEEE Journal on Selected Areas in Information Theory, 1(1): 67-83, 2020. +Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. arXiv, 2014. +Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In International Conference on Computer Vision (ICCV), 2019. +Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In North American Association for Computational Linguistics (NAACL), 2018. +Matthew E Peters, Sebastian Ruder, and Noah A Smith. To tune or not to tune? adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pp. 7-14, 2019. +Viraj Prabhu, Shivam Khare, Deeksha Karthik, and Judy Hoffman. Selective entropy optimization via committee consistency for unsupervised domain adaptation. In International Conference on Computer Vision (ICCV), 2021. +Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. 
In International Conference on Machine Learning (ICML), volume 139, pp. 8748-8763, 2021.
+
+Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, and Percy Liang. Understanding and mitigating the tradeoff between robustness and accuracy. In International Conference on Machine Learning (ICML), 2020.
+Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do CIFAR-10 classifiers generalize to CIFAR-10? arXiv, 2018.
+Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do ImageNet classifiers generalize to ImageNet? In International Conference on Machine Learning (ICML), 2019.
+Mark Rudelson and Roman Vershynin. Smallest singular value of a random rectangular matrix. Communications on Pure and Applied Mathematics, 62:1707-1739, 2009.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.
+Shibani Santurkar, Dimitris Tsipras, and Aleksander Madry. Breeds: Benchmarks for subpopulation shift. arXiv, 2020.
+Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv, 2014.
+Gal Shachaf, Alon Brutzkus, and Amir Globerson. A theoretical analysis of fine-tuning with linear teachers. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
+Shuhan Tan, Xingchao Peng, and Kate Saenko. Class-imbalanced domain adaptation: An empirical odyssey. arXiv preprint arXiv:1910.10320, 2020.
+Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig Schmidt. Measuring robustness to natural distribution shifts in image classification. arXiv preprint arXiv:2007.00644, 2020.
+Nilesh Tripuraneni, Michael I. Jordan, and Chi Jin. On the theory of transfer learning: The importance of task diversity. arXiv, 2020.
+Joel A. Tropp. An introduction to matrix concentration inequalities. Foundations and Trends in Machine Learning, 8:1-230, 2015. +Prasetya Ajie Utama, Nafise Sadat Moosavi, Victor Sanh, and Iryna Gurevych. Avoiding inference heuristics in few-shot prompt-based finetuning. arXiv preprint arXiv:2109.04144, 2021. +Haohan Wang, Songwei Ge, Zachary Lipton, and Eric P Xing. Learning robust global representations by penalizing local predictive power. In Advances in Neural Information Processing Systems (NeurIPS), 2019. +Mitchell Wortsman, Gabriel Ilharco, Mike Li, Jong Wook Kim, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, and Ludwig Schmidt. Robust fine-tuning of zero-shot models. arXiv preprint arXiv:2109.01903, 2021. +Sen Wu, Hongyang R. Zhang, and Christopher Ré. Understanding and improving information transfer in multi-task learning. In International Conference on Learning Representations (ICLR), 2020. +Sang Michael Xie, Ananya Kumar, Robbie Jones, Fereshte Khani, Tengyu Ma, and Percy Liang. In-N-out: Pre-training and self-training using auxiliary information for out-of-distribution robustness. In International Conference on Learning Representations (ICLR), 2021a. +Sang Michael Xie, Tengyu Ma, and Percy Liang. Composed fine-tuning: Freezing pre-trained denoising autoencoders for improved generalization. In International Conference on Machine Learning (ICML), 2021b. +Fisher Yu, Haofeng Chen, Xin Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In Computer Vision and Pattern Recognition (CVPR), 2020. + +Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, Lucas Beyer, Olivier Bachem, Michael Tschannen, Marcin Michalski, Olivier Bousquet, Sylvain Gelly, and Neil Houlsby. 
A large-scale study of representation learning with the visual task adaptation benchmark. arXiv, 2020.
+Jeffrey O Zhang, Alexander Sax, Amir Zamir, Leonidas Guibas, and Jitendra Malik. Side-tuning: A baseline for network adaptation via additive side networks. In European Conference on Computer Vision (ECCV), 2020.
+Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. arXiv preprint arXiv:2109.01134, 2021.
+Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. FreeLB: Enhanced adversarial training for natural language understanding. In International Conference on Learning Representations (ICLR), 2020.
+
+# A PROOFS FOR SECTION 3
+
+# A.1 PRELIMINARIES ON IMPORTANT NOTATIONS AND PRINCIPAL ANGLES
+
+Big-Oh Notation: For convenience, we use big-oh notation in a way that differs from standard theoretical computer science texts. When we write $O(\mathrm{expr})$, we mean that the term can be replaced by $c \cdot \mathrm{expr}$ for some universal constant $c$ such that the statement holds. As an example, we can say $5x^{2} \leq O(x^{2})$ because there exists a universal constant ($c = 5$) such that $5x^{2} \leq 5x^{2}$. More examples: we can also say $5x^{2} \geq O(x^{2})$, or, if $x \geq 1$, then $7x^{2} \leq O(x^{3})$ and $0.1x^{2} \geq O(x)$.
+
+Singular Values: Given a rectangular matrix $A \in \mathbb{R}^{m \times n}$, let $r = \min(m, n)$. The minimum singular value is defined as the $r$-th largest singular value of $A$, so $\sigma_{\min}(A) = \sigma_r(A)$.
+
+Working with minimum singular values requires more care than working with maximum singular values. In particular, for rectangular matrices some bounds depend on whether the matrix is 'fat' (has more columns than rows) or 'tall' (has more rows than columns).
+
+Given a matrix $A$, the operator norm $\| A \|_2$ is the maximum singular value: $\| A \|_2 = \sigma_{\max}(A)$.
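These definitions are easy to check numerically. The following NumPy sketch (our own illustration, not part of the proofs) confirms that the operator norm is the largest singular value, and shows why 'fat' versus 'tall' matters: for a tall matrix, $\sigma_{\min}$ is attained as the minimum of $\|Av\|_2$ over unit vectors $v$, while for a fat matrix that minimum is $0$ because of the nullspace, even though $\sigma_{\min}(A) > 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))          # 'fat': more columns than rows
r = min(A.shape)                         # r = min(m, n) = 3

s = np.linalg.svd(A, compute_uv=False)   # singular values, descending order
sigma_min, sigma_max = s[r - 1], s[0]    # sigma_min(A) = sigma_r(A)

# Operator norm = maximum singular value.
assert np.isclose(np.linalg.norm(A, 2), sigma_max)

# 'Tall' matrix A.T: sigma_min is attained as min of ||A.T v|| over unit v.
v_min = np.linalg.svd(A.T)[2][-1]        # right singular vector for sigma_min
assert np.isclose(np.linalg.norm(A.T @ v_min), sigma_min)

# 'Fat' matrix A: there are unit vectors in the nullspace of A, so the
# minimum of ||A v|| over unit v is 0 even though sigma_min(A) > 0.
null_v = np.linalg.svd(A)[2][-1]         # a unit vector annihilated by A
assert np.linalg.norm(A @ null_v) < 1e-10
assert sigma_min > 0
```

This is exactly the care point above: minimum singular values of rectangular matrices only behave like "smallest stretch factors" on the right subspace.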
+
+Projectors: Given a subspace $R$ of $\mathbb{R}^d$, let $\Pi_R$ denote the orthogonal projection onto $R$, satisfying for all $x \in \mathbb{R}^d$:
+
+$$
+\Pi_{R}(x) \in R \quad \text{and} \quad \forall r \in R, \; \| x - \Pi_{R}(x) \|_{2} \leq \| x - r \|_{2}. \tag{A.1}
+$$
+
+If $E \in \mathbb{R}^{d \times \dim(R)}$ has orthonormal columns that form a basis for $R$, then we have:
+
+$$
+\Pi_{R} = E E^{\top} \tag{A.2}
+$$
+
+From this we can easily check that $\Pi_R^2 = \Pi_R$ and $\Pi_R^\top = \Pi_R$. See e.g., Chapter 2.5.1 of Golub & Loan (2013) for more information.
+
+Principal Angles: Given two non-zero vectors $x$ and $y$, the cosine of the angle between them, $\cos \theta$, is:
+
+$$
+\cos \theta = \frac{x^{\top} y}{\| x \|_{2} \| y \|_{2}} \tag{A.3}
+$$
+
+If we consider the 1-dimensional subspaces (that is, lines) $S_{x}$ and $S_{y}$ spanned by $x$ and $y$ respectively, then the cosine of the angle between them, $\cos \theta'$, is given by the absolute value (since lines are undirected):
+
+$$
+\cos \theta^{\prime} = \frac{\left| x^{\top} y \right|}{\| x \|_{2} \| y \|_{2}} \tag{A.4}
+$$
+
+Principal angles generalize this notion to higher dimensions. See e.g., Chapter 6.4.3 in Golub & Loan (2013) for more information on principal angles.
+
+Definition A.1. Given two non-empty subspaces $R$ and $S$ of $\mathbb{R}^d$, where $r = \min(\dim(R), \dim(S))$, we have $r$ principal angles:
+
+$$
+0 \leq \theta_{1} \leq \dots \leq \theta_{r} \leq \pi / 2. \tag{A.5}
+$$
+
+The directions of the inequalities swap when we take the cosine of the principal angles:
+
+$$
+1 \geq \cos \theta_{1} \geq \dots \geq \cos \theta_{r} \geq 0. \tag{A.6}
+$$
+
+The cosines of the principal angles are given by the SVD: let $E \in \mathbb{R}^{d \times \dim(R)}$ and $F \in \mathbb{R}^{d \times \dim(S)}$ have orthonormal columns which span $R$ and $S$ respectively.
Then we have:
+
+$$
+\cos \theta_{i} = \sigma_{i}\left(E^{\top} F\right), \tag{A.7}
+$$
+
+where $\sigma_{i}$ denotes the $i$-th largest singular value. In this paper, we are interested in the cosine of the largest angle between them, given by:
+
+$$
+\cos \theta_{\max}(R, S) = \cos \theta_{r} \tag{A.8}
+$$
+
+We can massage this into a variational characterization of the maximum principal angle, which is important for lower bounding the error of fine-tuning outside the span of the training data.
+
+Lemma A.2. Suppose $\dim(R) \leq \dim(S)$, and let $F \in \mathbb{R}^{d \times \dim(S)}$ have orthonormal columns that form a basis for $S$. We have:
+
+$$
+\cos \theta_{\max}(R, S) = \min_{r \in R, \| r \|_{2} = 1} \| F^{\top} r \|_{2} \tag{A.9}
+$$
+
+Proof. Let $E \in \mathbb{R}^{d \times \dim(R)}$ and $F \in \mathbb{R}^{d \times \dim(S)}$ have orthonormal columns that span $R$ and $S$ respectively. Since $\dim(R) \leq \dim(S)$ (a crucial condition!), $F^{\top}E$ is a 'tall' matrix (it has at least as many rows as columns), so we have:
+
+$$
+\sigma_{\min}\left(F^{\top} E\right) = \min_{\| v \|_{2} = 1} \| F^{\top} E v \|_{2}. \tag{A.10}
+$$
+
+The result now follows from some algebra:
+
+$$
+\begin{array}{rlr}
+\cos \theta_{\max}(R, S) &= \sigma_{\min}\left(F^{\top} E\right) & \text{(A.11)} \\
+&= \min_{\| v \|_{2} = 1} \| F^{\top} E v \|_{2} & \text{(A.12)} \\
+&= \min_{r \in R, \| r \|_{2} = 1} \| F^{\top} r \|_{2}. & \text{(A.13)}
+\end{array}
+$$
+
+# A.2 FEATURE DISTORTION THEOREM
+
+We first prove our core theorem, that fine-tuning distorts pretrained features.
+
+Restatement of Theorem 3.2. In the overparameterized linear setting, let $S^{\perp} = \text{rowspace}(X)^{\perp}$, $R_0 = \text{rowspace}(B_0)$, and $v_{\star}, B_{\star}$ be the optimal parameters with $w_{\star} = B_{\star}^{\top} v_{\star}$.
If $\cos \theta_{\max}(R_0, S^{\perp}) > 0$, then for all time steps $t$, the OOD error of the fine-tuning iterates $(B_{\mathrm{ft}}(t), v_{\mathrm{ft}}(t))$ is lower bounded:
+
+$$
+\sqrt{L_{\mathrm{ood}}\left(v_{\mathrm{ft}}(t), B_{\mathrm{ft}}(t)\right)} \geq \sqrt{\sigma_{\min}(\Sigma)} \left(\frac{\cos \theta_{\max}\left(R_{0}, S^{\perp}\right)}{\sqrt{k}} \frac{\min\left(\varphi, \varphi^{2} / \| w_{\star} \|_{2}\right)}{\left(1 + \| w_{\star} \|_{2}\right)^{2}} - \epsilon\right), \tag{A.14}
+$$
+
+where $\varphi^2 = |(v_0^\top v_\star)^2 - (v_\star^\top v_\star)^2|$ is defined to be the initial head alignment error and $\epsilon \geq d(B_0, B_\star)$ is the error in the pretrained feature extractor.
+
+We follow the sketch in the main paper. We begin with a few lemmas, showing that certain quantities are preserved throughout the fine-tuning process.
+
+Our first lemma says that the representations $B_{ft}^{t} x$ do not change for examples perpendicular to the span of the training examples. Note that the final output ${v_{ft}^{t}}^{\top} B_{ft}^{t} x$ still changes, because $v_{ft}^{t}$ changes.
+
+Lemma A.3. For all times $t$ and all $x \in S^{\perp}$, we have:
+
+$$
+B_{0} x = B_{ft}^{t} x \tag{A.15}
+$$
+
+Proof. We initialized fine-tuning with the feature extractor $B_{\mathrm{ft}}(0) = B_0$. It suffices to show that $\partial_t B_{ft}^t x = 0$ for all $x \in S^\perp$.
Recall that $\partial_t B_{ft}^t$ is given by the gradient flow update equation:
+
+$$
+\partial_{t} B_{ft}^{t} = -\partial_{B} \widehat{L}\left(v_{ft}^{t}, B_{ft}^{t}\right), \quad \text{where} \quad \widehat{L}(v, B) = \| X B^{\top} v - Y \|_{2}^{2}. \tag{A.16}
+$$
+
+Computing the RHS explicitly using the multivariable chain rule, we get:
+
+$$
+\partial_{t} B_{ft}^{t} = -2 v_{ft}^{t} \left(X {B_{ft}^{t}}^{\top} v_{ft}^{t} - Y\right)^{\top} X \tag{A.17}
+$$
+
+Since $x$ is a constant, we get:
+
+$$
+\partial_{t} B_{ft}^{t} x = -2 v_{ft}^{t} \left(X {B_{ft}^{t}}^{\top} v_{ft}^{t} - Y\right)^{\top} X x \tag{A.18}
+$$
+
+But $Xx = 0$ for $x \in S^{\perp}$, since $x \in S^{\perp}$ means that $x$ is perpendicular to the rowspace of $X$ (i.e., perpendicular to the rows of $X$). So the RHS is 0, that is, $\partial_t B_{ft}^t x = 0$, as desired.
+
+Next, we show that the changes in the head and the feature extractor are 'coupled': if the head changes in a certain way, then the feature extractor cannot just stay the same. In the literature, this is sometimes called the "balancedness" lemma, and it has been proved in prior work on two-layer linear networks.
+
+Lemma A.4. For all $t$ we have:
+
+$$
+v_{0} v_{0}^{\top} - B_{0} B_{0}^{\top} = v_{ft}^{t} {v_{ft}^{t}}^{\top} - B_{ft}^{t} {B_{ft}^{t}}^{\top} \tag{A.19}
+$$
+
+Proof. This follows by showing that the derivative is 0:
+
+$$
+\partial_{t}\left[ v_{ft}^{t} {v_{ft}^{t}}^{\top} - B_{ft}^{t} {B_{ft}^{t}}^{\top} \right] = 0, \tag{A.20}
+$$
+
+which can be verified by direct calculation. See Theorem 2.2 in Du et al. (2018) and the proof of Theorem 1 in Arora et al. (2018).
+
+For our proof we will require that every feature $r \in R$ can be generated from some OOD direction, that is, $r = B_0 u$ for some $u \in S^\perp$. We will show that this is implied by the condition on the principal angle, $\cos \theta_{\max}(R, S^\perp) > 0$ where $R = \mathrm{rowspace}(B_0)$, which we assumed in Theorem 3.2.
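Both conservation laws above (Lemmas A.3 and A.4) can be checked numerically. The following is a small NumPy sketch (our own illustration with made-up dimensions, not part of the formal argument). It approximates gradient flow by gradient descent with a small step size: the invariance of Lemma A.3 holds exactly even for discrete steps (the gradient with respect to $B$ annihilates any $x$ with $Xx = 0$), while the balancedness quantity of Lemma A.4 is conserved only up to discretization error.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 3, 4                  # ambient dim, feature dim, num train points
X = rng.standard_normal((n, d))     # training inputs as rows
Y = rng.standard_normal(n)
B = np.linalg.qr(rng.standard_normal((d, k)))[0].T  # k x d, orthonormal rows
v = rng.standard_normal(k)

# A direction in S^perp, i.e., perpendicular to the rowspace of X.
P_row = X.T @ np.linalg.pinv(X.T)   # projector onto rowspace(X)
x_perp = (np.eye(d) - P_row) @ rng.standard_normal(d)

feat0 = B @ x_perp                  # representation of x_perp at t = 0
balance0 = np.outer(v, v) - B @ B.T # quantity conserved by Lemma A.4

lr = 2e-5
for _ in range(50000):              # Euler steps approximating gradient flow
    resid = X @ B.T @ v - Y         # residual of the loss ||X B^T v - Y||^2
    grad_v = 2 * B @ X.T @ resid    # gradient with respect to the head v
    grad_B = 2 * np.outer(v, resid) @ X   # gradient with respect to B
    v, B = v - lr * grad_v, B - lr * grad_B

# Lemma A.3: features of x_perp are unchanged (X @ x_perp = 0 kills the gradient).
assert np.allclose(B @ x_perp, feat0, atol=1e-6)
# Lemma A.4: v v^T - B B^T is (approximately) conserved along the trajectory.
assert np.allclose(np.outer(v, v) - B @ B.T, balance0, atol=0.05)
```

The dimensions, step size, and iteration count here are arbitrary choices for illustration; the point is only that both invariants survive training while the loss itself changes.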
The following lemma shows this (and also quantifies that the norm of $u$ does not shrink too much when projected onto $R$).
+
+Lemma A.5. Let $R, S$ be subspaces of $\mathbb{R}^d$ with $\dim(R) \leq \dim(S)$. For all $r \in R$ with $\|r\|_2 = \cos \theta_{\max}(R, S)$, there exists $s \in S$ with $\Pi_R(s) = r$ and $\|s\|_2 \leq 1$. Here $\Pi_R \in \mathbb{R}^{d \times d}$ projects a vector onto $R$.
+
+Proof. Let $c = \cos \theta_{\max}(R, S)$. First, we get rid of an easy case: if $c = 0$, then we need to show the claim for all $r \in R$ with $\|r\|_2 = c = 0$, which means $r = 0$. Then we can just pick $s = 0$, and $\Pi_R(s) = 0 = r$ and $\|s\|_2 = 0 \leq 1$. So for the rest of the proof we assume $c > 0$.
+
+Consider an arbitrary vector $r \in R$ with $\| r \|_2 = c$. Let $E \in \mathbb{R}^{d \times \dim(R)}$ and $F \in \mathbb{R}^{d \times \dim(S)}$ have orthonormal columns, which form a basis for $R$ and $S$ respectively.
+
+Step 1: Finding $s$: Since the columns of $E$ span $R$, $r = Ez$ for some $z \in \mathbb{R}^{\dim(R)}$. We have $c = \sigma_{\min}(E^\top F) > 0$, which means that $E^\top F \in \mathbb{R}^{\dim(R) \times \dim(S)}$ has rank $\dim(R)$ since $\dim(R) \leq \dim(S)$; in other words, $E^\top F$ has full row rank, since its row dimension is no larger than its column dimension. So $z = E^\top F w$ for some $w \in \mathrm{rowspace}(E^\top F)$. Then we set $s = Fw$; this means $s \in S$ because the columns of $F$ form a basis for $S$. In addition, following the steps above we have $r = Ez = EE^\top Fw = EE^\top s$. We note that $\Pi_R = EE^\top$ is the projection onto $R$ (see e.g., Chapter 2.5.1 of Golub & Loan (2013)).
+
+Step 2: Bounding the norm of $s$: It suffices to show that $\| s \|_2 \leq 1$. Since $F$ has orthonormal columns, $\| s \|_2 = \| F w \|_2 = \| w \|_2$, so it suffices to show that $\| w \|_2 \leq 1$. Since $E$ has orthonormal columns, $\| r \|_2 = \| z \|_2$.
Recall that $z = E^\top F w$ with $w \in \mathrm{rowspace}(E^\top F)$; from Lemma A.6 we have:
+
+$$
+\| z \|_{2} \geq \sigma_{\min}\left(E^{\top} F\right) \| w \|_{2} = c \| w \|_{2}. \tag{A.21}
+$$
+
+Rearranging, we get $\| w \|_2 \leq \| z \|_2 / c = 1$, as desired.
+
+In the lemma above, we used a standard linear algebraic result that we include for completeness. This says that $A$ cannot shrink vectors in its rowspace too much, where the shrinkage factor is given by the minimum singular value of $A$.
+
+Lemma A.6. Let $A \in \mathbb{R}^{m \times n}$. Let $r = \min(m, n)$. Then if $x \in \text{rowspace}(A)$, we have $\|Ax\|_2 \geq \sigma_r(A)\|x\|_2$.
+
+Proof. We bound the norm of $x$ using the SVD. Consider the singular value decomposition (SVD) of $A$:
+
+$$
+A = U D V^{\top}, \tag{A.22}
+$$
+
+where $U \in \mathbb{R}^{m \times r}$, $D \in \mathbb{R}^{r \times r}$, $V^{\top} \in \mathbb{R}^{r \times n}$, where $U$ and $V$ have orthonormal columns, and $D = \mathrm{diag}(\sigma_1, \dots, \sigma_r)$ is a diagonal matrix with $\sigma_1 \geq \dots \geq \sigma_r \geq 0$.
+
+$$
+\begin{array}{rlr}
+\| A x \|_{2} &= \| U D V^{\top} x \|_{2} & [\text{SVD of } A] \quad \text{(A.23)} \\
+&= \| D V^{\top} x \|_{2} & [U \in \mathbb{R}^{m \times r} \text{ has orthonormal columns}] \quad \text{(A.24)} \\
+&\geq \sigma_{r} \| V^{\top} x \|_{2} & [D \text{ is diagonal}] \quad \text{(A.25)} \\
+&= \sigma_{r} \| x \|_{2} & [\text{rows of } V^{\top} \text{ are orthonormal, } x \text{ is in the rowspace}] \quad \text{(A.26)} \\
+&= \sigma_{r}(A) \| x \|_{2} & \text{(A.27)}
+\end{array}
+$$
+
+Where for the fourth step, we used the fact that if $x \in \mathrm{rowspace}(V^{\top})$ and the rows of $V^{\top}$ are orthonormal, then $\| V^{\top} x \|_{2} = \| x \|_{2}$.
One way to see this is by writing $x = \sum_{i}\alpha_{i}v_{i}$, where the $v_{i}$ are rows of $V^{\top}$, and then noting that $V^{\top}x = (\alpha_{1},\dots,\alpha_{r})$, so $x$ and $V^{\top}x$ have the same norm.
+
+We recall that $P_{\mathrm{ood}}$ has second moment $\Sigma$: $\mathbb{E}[xx^{\top}] = \Sigma$ when $x \sim P_{\mathrm{ood}}$, where $\Sigma$ is invertible. So with some simple algebra we can write the OOD error $L_{\mathrm{ood}}$ in terms of $\Sigma$ (the proof is standard and basic, but we include it for completeness):
+
+Lemma A.7.
+
+$$
+L_{\mathrm{ood}}(v, B) = \left(B_{\star}^{\top} v_{\star} - B^{\top} v\right)^{\top} \Sigma \left(B_{\star}^{\top} v_{\star} - B^{\top} v\right) \geq \sigma_{\min}(\Sigma) \| B_{\star}^{\top} v_{\star} - B^{\top} v \|_{2}^{2}. \tag{A.28}
+$$
+
+Proof. Let $x \sim P_{\mathrm{ood}}$. We have:
+
+$$
+\begin{array}{rlr}
+L_{\mathrm{ood}}(v, B) &= \mathbb{E}\left[\left(v_{\star}^{\top} B_{\star} x - v^{\top} B x\right)^{2}\right] & \text{(A.29)} \\
+&= \mathbb{E}\left[\left(B_{\star}^{\top} v_{\star} - B^{\top} v\right)^{\top} x x^{\top} \left(B_{\star}^{\top} v_{\star} - B^{\top} v\right)\right] & \text{(A.30)} \\
+&= \left(B_{\star}^{\top} v_{\star} - B^{\top} v\right)^{\top} \mathbb{E}\left[x x^{\top}\right] \left(B_{\star}^{\top} v_{\star} - B^{\top} v\right) & \text{(A.31)} \\
+&= \left(B_{\star}^{\top} v_{\star} - B^{\top} v\right)^{\top} \Sigma \left(B_{\star}^{\top} v_{\star} - B^{\top} v\right). & \text{(A.32)}
+\end{array}
+$$
+
+The inequality follows immediately because, for a symmetric positive semidefinite matrix $A$, $\sigma_{\min}(A)$ is simply the minimum of $x^\top A x$ over $x$ with unit $\ell_2$ norm.
+
+We now prove Theorem 3.2, following the 3 steps outlined in the main text.
+
+Proof of Theorem 3.2. Let $c = \cos \theta_{\max}(R, S^{\perp})$.
From Lemma A.7, we have $L_{\mathrm{ood}}(v_{ft}^{t}, B_{ft}^{t}) \geq \sigma_{\min}(\Sigma) \| B_{\star}^{\top} v_{\star} - {B_{ft}^{t}}^{\top} v_{ft}^{t} \|_{2}^{2}$, so it suffices to lower bound $\| B_{\star}^{\top} v_{\star} - {B_{ft}^{t}}^{\top} v_{ft}^{t} \|_{2}$.
+
+Because it makes the proof much easier, we will prove the contrapositive, and then convert back to the original theorem statement. We assume $\| B_{\star}^{\top} v_{\star} - {B_{ft}^{t}}^{\top} v_{ft}^{t} \|_{2} \leq \Delta$, and will show that:
+
+$$
+\left|\left(v_{0}^{\top} v_{\star}\right)^{2} - \left(v_{\star}^{\top} v_{\star}\right)^{2}\right| \leq \frac{\Delta + \epsilon}{c} g_{1}\left(\| w_{\star} \|_{2}\right) \sqrt{k} + \frac{(\Delta + \epsilon)^{2}}{c^{2}} g_{2}\left(\| w_{\star} \|_{2}\right) k, \tag{A.33}
+$$
+
+where $g_{1}$ and $g_{2}$ are non-negative polynomials that we will bound in the proof.
+
+We gave a basic outline of the proof in the main paper, and here we are just trying to be careful about capturing all the dependencies. We also give intuition for each step before diving into the algebra (which we include for completeness).
+
+Recall that in the overparameterized linear setting we assumed we have orthonormal $B_0$ with $\| B_0 - UB_\star \|_2 \leq \epsilon$ for some $U$. We note that the setup is rotationally symmetric, so without loss of generality we can suppose $\| B_0 - B_\star \|_2 \leq \epsilon$. This is because we can let $B_\star' = UB_\star$ and $v_\star' = Uv_\star$, and we have $w_\star = B_\star^\top v_\star = (UB_\star)^\top (Uv_\star)$, where $w_\star$ is the optimal classifier; so we can now write the entire proof in terms of $B_\star'$ and $v_\star'$.
+
+Step 1: Show that $\| v_{ft}^t - v_\star \|_2 \leq (\Delta + \epsilon \| v_\star \|_2) / c$: We first give intuition and then dive into the math. The key insight is to use the fact that in 'many' directions $B_{ft}^t$ and $B_0$ are the same (formally, for all $x \in S^\perp$, $B_{ft}^t x = B_0 x$).
But $B_0$ and $B_\star$ are close by assumption, which means that $B_{ft}^t$ and $B_\star$ are close in 'many' directions. Then, since we assumed in the contrapositive that ${v_{ft}^t}^\top B_{ft}^t$ and $v_\star^\top B_\star$ are close, we get that $v_{ft}^t$ and $v_\star$ are close in 'many' directions. Because $S^\perp$ covers the rowspace of $B_0$, we get that 'many' is $k$, which is precisely the dimensionality of $v_\star$, so the two vectors $v_{ft}^t$ and $v_\star$ must be close.
+
+We now dive into the math. Since $B_0$ has orthonormal rows, $B_0$ has full row rank.
+
+Let $z$ be given by:
+
+$$
+z = \frac{c}{\| v_{ft}^{t} - v_{\star} \|_{2}} \left(v_{ft}^{t} - v_{\star}\right) \tag{A.34}
+$$
+
+We note that $\| z \|_2 = c$. Then, we can find $y \in R = \text{rowspace}(B_0)$ such that $B_0 y = z$ (since $B_0$ has full row rank), and then $\| y \|_2 = \| z \|_2 = c$ (since $B_0$ has orthonormal rows).
+
+Since $c = \cos \theta_{\max}(R, S^{\perp}) > 0$, and $y \in R$ with $\|y\|_2 = c$, from Lemma A.5 we can choose $x \in S^{\perp}$ with $\|x\|_2 \leq 1$ and $\Pi_R(x) = y$. Then we have $B_0 x = z$, since $B_0$ annihilates the component of $x$ orthogonal to $R = \mathrm{rowspace}(B_0)$, so $B_0 x = B_0 \Pi_R(x) = B_0 y = z$.
+
+From Lemma A.3, since $x \in S^{\perp}$, $B_0$ does not change in the directions of $x$ when fine-tuning, so we have $B_0 x = B_{ft}^t x$.
+
+The claim now follows from simple algebraic manipulation, following the intuition we described. The algebra just captures what 'close' means and adds up the error terms.
$$
\begin{aligned}
\left\| v_{ft}^t - v_\star \right\|_2 &= \frac{1}{c} \left(v_{ft}^t - v_\star\right)^\top \left(\frac{c \left(v_{ft}^t - v_\star\right)}{\left\| v_{ft}^t - v_\star \right\|_2}\right) && \text{(A.35)} \\
&= \frac{1}{c} \left(v_{ft}^t - v_\star\right)^\top z && [\text{definition of } z] \quad \text{(A.36)} \\
&= \frac{1}{c} \left(v_{ft}^t - v_\star\right)^\top B_0 x && [\text{since } B_0 x = z] \quad \text{(A.37)} \\
&= \frac{1}{c} \left((v_{ft}^t)^\top B_0 x - v_\star^\top B_0 x\right) && [\text{algebra}] \quad \text{(A.38)} \\
&= \frac{1}{c} \left((v_{ft}^t)^\top B_{ft}^t x - v_\star^\top B_0 x\right) && [B_{ft}^t x = B_0 x \text{ since } x \in S^\perp] \quad \text{(A.39)} \\
&= \frac{1}{c} \left((v_{ft}^t)^\top B_{ft}^t - v_\star^\top B_0\right) x && [\text{algebra}] \quad \text{(A.40)} \\
&\leq \frac{1}{c} \left\|(v_{ft}^t)^\top B_{ft}^t - v_\star^\top B_0\right\|_2 \|x\|_2 && [\text{Cauchy-Schwarz}] \quad \text{(A.41)} \\
&\leq \frac{1}{c} \left\|(v_{ft}^t)^\top B_{ft}^t - v_\star^\top B_0\right\|_2 && [\text{since } \|x\|_2 \leq 1] \quad \text{(A.42)} \\
&\leq \frac{1}{c} \left\|(v_{ft}^t)^\top B_{ft}^t - v_\star^\top B_\star\right\|_2 + \frac{1}{c} \left\|v_\star^\top B_\star - v_\star^\top B_0\right\|_2 && [\text{triangle inequality}] \quad \text{(A.43)} \\
&= \frac{1}{c} \left\|(B_{ft}^t)^\top v_{ft}^t - B_\star^\top v_\star\right\|_2 + \frac{1}{c} \left\|v_\star^\top B_\star - v_\star^\top B_0\right\|_2 && [\text{taking transposes}] \quad \text{(A.44)} \\
&\leq \frac{1}{c} \left\|(B_{ft}^t)^\top v_{ft}^t - B_\star^\top v_\star\right\|_2 + \frac{1}{c} \sigma_{\max}\left(B_0 - B_\star\right) \|v_\star\|_2 && [\text{definition of max singular value}] \quad \text{(A.45)} \\
&\leq \frac{1}{c} \left\|(B_{ft}^t)^\top v_{ft}^t - B_\star^\top v_\star\right\|_2 + \frac{1}{c} \epsilon \|v_\star\|_2 && [\text{since } \sigma_{\max}(B_0 - B_\star) \leq \epsilon] \quad \text{(A.46)} \\
&\leq \frac{\Delta + \epsilon \|v_\star\|_2}{c} && [\text{since } \|B_\star^\top v_\star - (B_{ft}^t)^\top v_{ft}^t\|_2 \leq \Delta] \quad \text{(A.47)}
\end{aligned}
$$

This shows that $\|v_{ft}^t - v_\star\|_2 \leq (\Delta + \epsilon \|v_\star\|_2)/c$.

Step 2A: Show that $\|B_{ft}^t\|_F^2$ is small: The key insight is to take the trace on both sides of Proposition A.4, which bounds the Frobenius norm of $B_{ft}^t$ and therefore the operator norm.

Rearranging Proposition A.4, we have:

$$
B_{ft}^t (B_{ft}^t)^\top = B_0 B_0^\top + v_\star v_\star^\top - v_0 v_0^\top \tag{A.49}
$$

Taking the trace everywhere, we get:

$$
\operatorname{Tr}\left(B_{ft}^t (B_{ft}^t)^\top\right) = \operatorname{Tr}\left(B_0 B_0^\top\right) + \operatorname{Tr}\left(v_\star v_\star^\top\right) - \operatorname{Tr}\left(v_0 v_0^\top\right) \tag{A.50}
$$

For any matrix $A$, $\mathrm{Tr}(AA^\top) = \|A\|_F^2$, and for a vector $v$ the Frobenius norm is just the $\ell_2$-norm, so $\mathrm{Tr}(vv^\top) = \|v\|_2^2$.
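The two trace facts just stated are easy to verify numerically (an illustrative numpy sketch of ours):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 7))
v = rng.standard_normal(5)

# Tr(A A^T) equals the squared Frobenius norm of A.
assert np.isclose(np.trace(A @ A.T), np.linalg.norm(A, "fro") ** 2)
# For a vector, Tr(v v^T) equals the squared l2 norm.
assert np.isclose(np.trace(np.outer(v, v)), np.linalg.norm(v) ** 2)
```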
So we have:

$$
\left\| B_{ft}^t \right\|_F^2 = \left\| B_0 \right\|_F^2 + \left\| v_\star \right\|_2^2 - \left\| v_0 \right\|_2^2 \tag{A.51}
$$

Squares are non-negative, so we get the inequality:

$$
\left\| B_{ft}^t \right\|_F^2 \leq \left\| B_0 \right\|_F^2 + \left\| v_\star \right\|_2^2 \tag{A.52}
$$

Step 2B: Show that $\|B_0^\top v_\star\|_2^2 - \|(B_{ft}^t)^\top v_\star\|_2^2$ is small: This step doesn't involve much insight, and is standard perturbation analysis: we simply factor the difference of squares and bound each term.

First, we bound $\|(B_{ft}^t)^\top v_{ft}^t - (B_{ft}^t)^\top v_\star\|_2$:

$$
\begin{aligned}
\left\| (B_{ft}^t)^\top v_{ft}^t - (B_{ft}^t)^\top v_\star \right\|_2 &\leq \sigma_{\max}\left(B_{ft}^t\right) \left\| v_{ft}^t - v_\star \right\|_2 && \text{(A.53)} \\
&\leq \left\| B_{ft}^t \right\|_F \left\| v_{ft}^t - v_\star \right\|_2 && \text{(A.54)} \\
&\leq \sqrt{\|B_0\|_F^2 + \|v_\star\|_2^2} \left\| v_{ft}^t - v_\star \right\|_2 && \text{(A.55)} \\
&\leq \sqrt{\|B_0\|_F^2 + \|v_\star\|_2^2} \left(\frac{\Delta + \epsilon \|v_\star\|_2}{c}\right) && \text{(A.56)}
\end{aligned}
$$

Next, we bound $\|B_0^\top v_\star - (B_{ft}^t)^\top v_\star\|_2$:

$$
\begin{aligned}
\left\| B_0^\top v_\star - (B_{ft}^t)^\top v_\star \right\|_2 &\leq \left\| B_0^\top v_\star - B_\star^\top v_\star \right\|_2 + \left\| B_\star^\top v_\star - (B_{ft}^t)^\top v_\star \right\|_2 && \text{(A.57)} \\
&\leq \sigma_{\max}\left(B_0 - B_\star\right) \|v_\star\|_2 + \left\| B_\star^\top v_\star - (B_{ft}^t)^\top v_\star \right\|_2 && \text{(A.58)} \\
&\leq \epsilon \|v_\star\|_2 + \left\| B_\star^\top v_\star - (B_{ft}^t)^\top v_\star \right\|_2 && \text{(A.59)} \\
&\leq \epsilon \|v_\star\|_2 + \left\| B_\star^\top v_\star - (B_{ft}^t)^\top v_{ft}^t \right\|_2 + \left\| (B_{ft}^t)^\top v_{ft}^t - (B_{ft}^t)^\top v_\star \right\|_2 && \text{(A.60)} \\
&\leq \epsilon \|v_\star\|_2 + \Delta + \sqrt{\|B_0\|_F^2 + \|v_\star\|_2^2} \left(\frac{\Delta + \epsilon \|v_\star\|_2}{c}\right) && \text{(A.61)} \\
&=: \Delta_2 && \text{(A.62)}
\end{aligned}
$$

Finally, we bound $\left| \|B_0^\top v_\star\|_2^2 - \|(B_{ft}^t)^\top v_\star\|_2^2 \right|$, using the identity:

$$
\begin{aligned}
\left| \|u\|_2^2 - \|v\|_2^2 \right| &= \left| (u - v)^\top (u + v) \right| && \text{(A.63)} \\
&\leq \|u - v\|_2 \|u + v\|_2 && \text{(A.64)} \\
&\leq \|u - v\|_2 \left(2 \|u\|_2 + \|u - v\|_2\right) && \text{(A.65)}
\end{aligned}
$$

Applying this:

$$
\begin{aligned}
\left| \left\| B_0^\top v_\star \right\|_2^2 - \left\| (B_{ft}^t)^\top v_\star \right\|_2^2 \right| &\leq \left\| B_0^\top v_\star - (B_{ft}^t)^\top v_\star \right\|_2 \left(2 \left\| B_0^\top v_\star \right\|_2 + \left\| B_0^\top v_\star - (B_{ft}^t)^\top v_\star \right\|_2\right) && \text{(A.66)} \\
&\leq \Delta_2 \left(2 \left\| B_0^\top v_\star \right\|_2 + \Delta_2\right) && \text{(A.67)} \\
&\leq \Delta_2 \left(2 \left\| B_\star^\top v_\star \right\|_2 + 2 \left\| B_0^\top v_\star - B_\star^\top v_\star \right\|_2 + \Delta_2\right) && \text{(A.68)} \\
&\leq \Delta_2 \left(2 \|w_\star\|_2 + 2 \epsilon \|v_\star\|_2 + \Delta_2\right) && \text{(A.69)} \\
&=: \Delta_3 && \text{(A.70)}
\end{aligned}
$$

Step 3: Use Proposition A.4 to show $v_0$ and $v_\star$ must be close: The key insight is that we start from Proposition A.4 and left and right multiply by $v_\star$; after that we use the previous steps and some standard perturbation analysis.
We start from Proposition A.4:

$$
v_0 v_0^\top - B_0 B_0^\top = v_{ft}^t (v_{ft}^t)^\top - B_{ft}^t (B_{ft}^t)^\top \tag{A.71}
$$

The key step is to left multiply both sides by $v_\star^\top$ and right multiply both sides by $v_\star$ to get:

$$
\left(v_0^\top v_\star\right)^2 - \left\| B_0^\top v_\star \right\|_2^2 = \left((v_{ft}^t)^\top v_\star\right)^2 - \left\| (B_{ft}^t)^\top v_\star \right\|_2^2 \tag{A.72}
$$

Rearranging, and then using Equation A.66, we get:

$$
\left| \left((v_{ft}^t)^\top v_\star\right)^2 - \left(v_0^\top v_\star\right)^2 \right| = \left| \left\| (B_{ft}^t)^\top v_\star \right\|_2^2 - \left\| B_0^\top v_\star \right\|_2^2 \right| \leq \Delta_3 \tag{A.73}
$$

This is close to what we want, except we have $((v_{ft}^t)^\top v_\star)^2$ on the LHS instead of $(v_\star^\top v_\star)^2$. We previously showed in Step 1 that $v_{ft}^t$ and $v_\star$ are close, so with some algebra we can bound the difference between $((v_{ft}^t)^\top v_\star)^2$ and $(v_\star^\top v_\star)^2$:

$$
\begin{aligned}
\left| \left((v_{ft}^t)^\top v_\star\right)^2 - \left(v_\star^\top v_\star\right)^2 \right| &= \left| \left((v_{ft}^t)^\top v_\star - v_\star^\top v_\star\right) \left((v_{ft}^t)^\top v_\star + v_\star^\top v_\star\right) \right| && \text{(A.74)} \\
&= \left| \left((v_{ft}^t)^\top v_\star - v_\star^\top v_\star\right) \left[2 v_\star^\top v_\star + \left((v_{ft}^t)^\top v_\star - v_\star^\top v_\star\right)\right] \right| && \text{(A.75)} \\
&= \left| v_\star^\top \left(v_{ft}^t - v_\star\right) \right| \left| 2 v_\star^\top v_\star + v_\star^\top \left(v_{ft}^t - v_\star\right) \right| && \text{(A.76)} \\
&\leq \left\| v_{ft}^t - v_\star \right\|_2 \|v_\star\|_2^2 \left[2 \|v_\star\|_2 + \left\| v_{ft}^t - v_\star \right\|_2\right] && \text{(A.77)} \\
&\leq (\Delta / c) \|v_\star\|_2^2 \left(2 \|v_\star\|_2 + (\Delta / c)\right) =: \Delta_4 && \text{(A.78)}
\end{aligned}
$$

Above, from the third line to the fourth line, we used the triangle inequality and Cauchy-Schwarz.
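The scalar difference-of-squares bound used in this step can be sanity-checked numerically; the following numpy sketch (the instance and perturbation size are illustrative choices of ours) verifies the inequality in the form of Equation A.77:

```python
import numpy as np

rng = np.random.default_rng(1)
k = 5
v_star = rng.standard_normal(k)
v_ft = v_star + 0.01 * rng.standard_normal(k)  # a nearby head, as in Step 1

a, b = float(v_ft @ v_star), float(v_star @ v_star)
lhs = abs(a ** 2 - b ** 2)

# Bound from (A.74)-(A.77): factor the difference of squares, then apply
# Cauchy-Schwarz and the triangle inequality.
dv = np.linalg.norm(v_ft - v_star)
ns = np.linalg.norm(v_star)
rhs = dv * ns ** 2 * (2 * ns + dv)

assert lhs <= rhs + 1e-12
```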
So finally, by the triangle inequality we can now bound $|(v_\star^\top v_\star)^2 - (v_0^\top v_\star)^2|$:

$$
\begin{aligned}
\left| \left(v_\star^\top v_\star\right)^2 - \left(v_0^\top v_\star\right)^2 \right| &\leq \left| \left(v_\star^\top v_\star\right)^2 - \left((v_{ft}^t)^\top v_\star\right)^2 \right| + \left| \left((v_{ft}^t)^\top v_\star\right)^2 - \left(v_0^\top v_\star\right)^2 \right| && \text{(A.79)} \\
&\leq \Delta_4 + \Delta_3 && \text{(A.80)}
\end{aligned}
$$

Wrap up, i.e., writing out $\Delta_4 + \Delta_3$ explicitly: This is basically the bound we want, but we would like to express $\Delta_3, \Delta_4$ in terms of $\Delta$ and $\epsilon$. Note that this step has no insight, and is just algebra; we include the details for reference and verifiability. We recall:

$$
\Delta_4 = (\Delta / c) \|v_\star\|_2^2 \left(2 \|v_\star\|_2 + (\Delta / c)\right) \tag{A.81}
$$

$$
\Delta_3 = \Delta_2 \left(2 \|w_\star\|_2 + 2 \epsilon \|v_\star\|_2 + \Delta_2\right) \tag{A.82}
$$

$$
\Delta_2 = \epsilon \|v_\star\|_2 + \Delta + \sqrt{\|B_0\|_F^2 + \|v_\star\|_2^2} \left(\frac{\Delta + \epsilon \|v_\star\|_2}{c}\right) \tag{A.83}
$$

Since $B_0$ has orthonormal rows (by assumption), $B_0^\top$ has orthonormal columns, so $\|w_\star\|_2 = \|B_0^\top v_\star\|_2 = \|v_\star\|_2$. In addition, since $B_0$ has $k$ orthonormal rows, $\|B_0\|_F = \sqrt{k}$. We also note that $\sqrt{\|B_0\|_F^2 + \|v_\star\|_2^2} \leq \|B_0\|_F + \|v_\star\|_2 = \sqrt{k} + \|w_\star\|_2$.
Since $c \leq 1$, we have:

$$
\epsilon \|v_\star\|_2 + \Delta \leq \left(\frac{\Delta + \epsilon \|v_\star\|_2}{c}\right) \tag{A.84}
$$

So for $\Delta_2$, up to constant factors we can ignore the $\epsilon \|v_\star\|_2 + \Delta$ term, which means we get:

$$
\Delta_2 \leq O\left(\left(\sqrt{k} + \|w_\star\|_2\right) \left(\frac{\Delta + \epsilon \|w_\star\|_2}{c}\right)\right) \tag{A.85}
$$

Using the fact that $\sqrt{k} + \|w_\star\|_2 \leq \sqrt{k}\left(1 + \|w_\star\|_2\right)$, we get:

$$
\Delta_2 \leq O\left(\sqrt{k} \left(1 + \|w_\star\|_2\right) \left(\frac{\Delta + \epsilon \|w_\star\|_2}{c}\right)\right) \tag{A.86}
$$

Then since $\Delta + \epsilon \|w_\star\|_2 \leq (1 + \|w_\star\|_2)(\Delta + \epsilon)$, we get:

$$
\Delta_2 \leq O\left(\sqrt{k} \left(1 + \|w_\star\|_2\right)^2 \left(\frac{\Delta + \epsilon}{c}\right)\right) \tag{A.87}
$$

Now for $\Delta_3$, first note that $\epsilon \leq 2$, since $B_\star$ and $B_0$ have orthonormal rows, so $\|B_\star - B_0\|_2 \leq 2$.
This means that $\epsilon \|w_\star\|_2 \leq 2 \|w_\star\|_2 = O(\|w_\star\|_2)$, so $\Delta_3$ simplifies to:

$$
\Delta_3 \leq O\left(\Delta_2 \left(\|w_\star\|_2 + \Delta_2\right)\right) = O\left(\Delta_2 \|w_\star\|_2 + \Delta_2^2\right) \tag{A.88}
$$

Substituting the bound for $\Delta_2$ into $\Delta_3$, we get:

$$
\Delta_3 \leq O\left(\sqrt{k} \|w_\star\|_2 \left(1 + \|w_\star\|_2\right)^2 \left(\frac{\Delta + \epsilon \|w_\star\|_2}{c}\right) + k \left(1 + \|w_\star\|_2\right)^4 \left(\frac{\Delta + \epsilon \|w_\star\|_2}{c}\right)^2\right) \tag{A.89}
$$

For $\Delta_4$, we get:

$$
\Delta_4 \leq O\left(\|w_\star\|_2^3 \frac{\Delta}{c} + \|w_\star\|_2^2 \left(\frac{\Delta}{c}\right)^2\right) \tag{A.90}
$$

Since $\Delta / c \leq (\Delta + \epsilon)/c$ and $\|w_\star\|_2^2 \leq (1 + \|w_\star\|_2)^2$, we have for the final error $\Delta_3 + \Delta_4$:

$$
\Delta_3 + \Delta_4 \leq \sqrt{k} \|w_\star\|_2 \left(1 + \|w_\star\|_2\right)^2 \left(\frac{\Delta + \epsilon}{c}\right) + k \left(1 + \|w_\star\|_2\right)^4 \left(\frac{\Delta + \epsilon}{c}\right)^2 \tag{A.91}
$$

Wrap up, i.e., taking the contrapositive: So we've shown that if $\|B_\star^\top v_\star - (B_{ft}^t)^\top v_{ft}^t\|_2^2 \leq \Delta$, then:

$$
\left| \left(v_\star^\top v_\star\right)^2 - \left(v_0^\top v_\star\right)^2 \right| \leq \frac{\Delta + \epsilon}{c} \|w_\star\|_2 \left(1 + \|w_\star\|_2\right)^2 \sqrt{k} + \frac{(\Delta + \epsilon)^2}{c^2} \left(1 + \|w_\star\|_2\right)^4 k \tag{A.92}
$$

We'd like to flip this around: suppose $|(v_\star^\top v_\star)^2 - (v_0^\top v_\star)^2| \geq \varphi^2$ for some $\varphi$.
To lower bound $\|B_\star^\top v_\star - (B_{ft}^t)^\top v_{ft}^t\|_2^2$, we simply take the contrapositive of what we have proved. Let $\Delta$ be given by:

$$
\Delta = \min\left(\frac{c}{\|w_\star\|_2 \left(1 + \|w_\star\|_2\right)^2 \sqrt{k}} \varphi^2, \frac{c}{\left(1 + \|w_\star\|_2\right)^2 \sqrt{k}} \varphi\right) - \epsilon \tag{A.93}
$$

In this case, with some algebra, we can show that:

$$
\left| \left(v_\star^\top v_\star\right)^2 - \left(v_0^\top v_\star\right)^2 \right| \geq \varphi^2 \geq \frac{\Delta + \epsilon}{c} \|w_\star\|_2 \left(1 + \|w_\star\|_2\right)^2 \sqrt{k} + \frac{(\Delta + \epsilon)^2}{c^2} \left(1 + \|w_\star\|_2\right)^4 k \tag{A.94}
$$

To see this, we bound each of the terms on the RHS separately using our definition of $\Delta$. Then, from the contrapositive of what we proved (compare with Equation A.92), we get:

$$
\left\| B_\star^\top v_\star - (B_{ft}^t)^\top v_{ft}^t \right\|_2^2 \geq \Delta \tag{A.95}
$$

Finally, we can massage $\Delta$ to combine terms and make it look slightly nicer:

$$
\Delta \geq \frac{c}{\sqrt{k}} \frac{\min\left(\varphi, \varphi^2 / \|w_\star\|_2\right)}{\left(1 + \|w_\star\|_2\right)^2} - \epsilon \tag{A.96}
$$

Then applying Lemma A.7 we get the desired result. For even more interpretability, if $\|w_\star\|_2 = 1$ and $\varphi$ is bounded above by some constant, then you can think of $\Delta$ as approximately $\frac{c}{\sqrt{k}} \varphi^2 - \epsilon$. This completes the proof.

# A.3 LP vs. FT (OOD)

We now prove Theorem 3.4, which compares linear probing and fine-tuning in the linear overparameterized setting, when the ID data lies in a lower-dimensional subspace.
We first define a distance (formally, a pseudometric) between pretrained feature extractors, which measures the operator-norm difference between them up to rotations (e.g., if $B = UB'$ for a rotation matrix $U$, then $B$ and $B'$ are equivalent, since we can obtain one feature extractor's representations from the other just by rotating):

Definition A.8 (Feature Extractor Distance). The distance between feature extractors $B, B' \in \mathbb{R}^{k \times d}$ (with orthonormal rows) is given by (where the min is over rotation matrices $U \in \mathbb{R}^{k \times k}$):

$$
d\left(B, B'\right) = \min_U \|B - U B'\|_2. \tag{A.97}
$$

We state a more precise version of Theorem 3.4: we fix all problem parameters except $B_0$ (which converges to $B_\star$). To define the limit, we consider a sequence of pretrained feature extractors $\{B_0^i\}_{i=1}^\infty$. We define the corresponding limit points of fine-tuning and linear probing when we start from the $i$-th pretrained feature extractor. That is, let $v_{\mathrm{ft}}^i(t), B_{\mathrm{ft}}^i(t)$ denote the parameters at time $t$ of fine-tuning if we initialize with $v_0, B_0^i$ (see Equation 3.2 for the fine-tuning updates). Let $v_{\mathrm{lp}}^{\infty,i}, B_0^i$ be the linear probing solution when initialized with $v_0, B_0^i$ (see Equation 3.5 for the linear probing updates). We note that the LP iterates converge to $v_{\mathrm{lp}}^{\infty,i}, B_0^i$ as a result of gradient flow on a convex problem.

Finally, Theorem 3.4 says that as the pretrained representations get better, linear probing does much better than fine-tuning OOD:

Theorem A.9 (Formal statement of Theorem 3.4).
In the linear overparameterized setting, under the ID subspace assumption, fix the dimensions of the setting $d, k, m$, the number of examples $n$, the ID subspace $S$, the ID distribution $P_{\mathrm{id}}$, the distribution over the head $v_0$, and the ground-truth parameters $v_\star, B_\star$. Assume the non-degeneracy conditions $\cos \theta_{\max}(R_*, S) > 0$ and $\cos \theta_{\max}(R_*, S^\perp) > 0$, where $R_* = \text{rowspace}(B_\star)$. Given a sequence of pretrained feature extractors $\{B_0^i\}_{i=1}^\infty$ with $B_0^i \to B_\star$, where the limit is in the pseudometric given by Definition A.8, the ratio of OOD errors of linear probing and fine-tuning converges in probability to 0:

$$
\frac{L_{\mathrm{ood}}\left(v_{\mathrm{lp}}^{\infty,i}, B_0^i\right)}{\inf_{t \geq 0} L_{\mathrm{ood}}\left(v_{\mathrm{ft}}^i(t), B_{\mathrm{ft}}^i(t)\right)} \xrightarrow{p} 0, \quad \text{as } i \to \infty. \tag{A.98}
$$

The purpose of the infimum is to capture the fact that the bound holds for all times $t$ for fine-tuning (and therefore also for the limit $v_{\mathrm{ft}}^\infty, B_{\mathrm{ft}}^\infty$ when it exists). Note that the ratio is a random variable because the training data is sampled from $P_{\mathrm{id}}$ and the head is sampled randomly ($v_0 \sim \mathcal{N}(0, \sigma^2 I)$ for some $\sigma^2$).

Proof. Recall that we say a sequence of real-valued random variables converges in probability to 0 (written as $X_i \xrightarrow{p} 0$) if for every $\epsilon', \delta > 0$, for all large enough $i$ (that is, for all $i \geq N$ for some $N$), we have:

$$
P\left(\left|X_i\right| > \epsilon'\right) \leq \delta. \tag{A.99}
$$

Accordingly, fix arbitrary $\epsilon', \delta > 0$; we will show that the ratio of errors is eventually smaller than $\epsilon'$ with probability at least $1 - \delta$.
Lower bounding fine-tuning error: Since $B_0^i \to B_\star$, from Lemma A.11 we have that $\cos \theta_{\max}(R^i, S^\perp) \to \cos \theta_{\max}(R_*, S^\perp)$, where $R^i = \mathrm{rowspace}(B_0^i)$. Since $\cos \theta_{\max}(R_*, S^\perp) > 0$, this means that for all large enough $i$ we have:

$$
\cos \theta_{\max}\left(R^i, S^\perp\right) > \cos \theta_{\max}\left(R_*, S^\perp\right) / 2. \tag{A.100}
$$

Next, from Lemma A.13, we have that with probability at least $1 - \delta/2$, $\text{Head-Error}(v_0, v_\star) = |(v_0^\top v_\star)^2 - (v_\star^\top v_\star)^2| \geq c_\delta$ for some $c_\delta > 0$. Plugging this into the fine-tuning bound in Theorem 3.2, this means that for all large enough $i$, with probability at least $1 - \delta/2$:

$$
\inf_{t \geq 0} \sqrt{L_{\mathrm{ood}}\left(v_{\mathrm{ft}}^i(t), B_{\mathrm{ft}}^i(t)\right)} \geq c_\delta' - d\left(B_0^i, B_\star\right), \tag{A.101}
$$

for some $c_\delta' > 0$. But since $B_0^i \to B_\star$, we have $d(B_0^i, B_\star) \to 0$ as $i \to \infty$. So this means that for all large enough $i$, with probability at least $1 - \delta/2$:

$$
\inf_{t \geq 0} L_{\mathrm{ood}}\left(v_{\mathrm{ft}}^i(t), B_{\mathrm{ft}}^i(t)\right) \geq c_\delta'', \tag{A.102}
$$

for some $c_\delta'' > 0$.

Upper bounding the linear probing error: Since $B_0^i \to B_\star$, from Lemma A.11 we have that $\cos \theta_{\max}(R^i, S) \to \cos \theta_{\max}(R_*, S)$, and so since $\cos \theta_{\max}(R_*, S) > 0$, for all large enough $i$ we have:

$$
\cos \theta_{\max}\left(R^i, S\right) > \cos \theta_{\max}\left(R_*, S\right) / 2. \tag{A.103}
$$

Plugging this into the RHS of Lemma A.15, Equation A.133, which upper bounds the OOD error of linear probing, we get that for all large enough $i$, with probability at least $1 - \delta/2$:

$$
L_{\mathrm{ood}}\left(v_{\mathrm{lp}}^{\infty,i}, B_0^i\right) \leq u_\delta \left(d\left(B_0^i, B_\star\right)\right)^2, \tag{A.104}
$$

for some $u_\delta > 0$. Again, since $d(B_0^i, B_\star) \to 0$ as $i \to \infty$, this means that for all large enough $i$, with probability at least $1 - \delta/2$, $d(B_0^i, B_\star)$ will be small enough so that:

$$
L_{\mathrm{ood}}\left(v_{\mathrm{lp}}^{\infty,i}, B_0^i\right) \leq c_\delta'' \epsilon'. \tag{A.105}
$$

Taking the ratio: So taking the ratio of the lower bound for fine-tuning and the upper bound for linear probing, we get with probability at least $1 - \delta$:

$$
\frac{L_{\mathrm{ood}}\left(v_{\mathrm{lp}}^{\infty,i}, B_0^i\right)}{\inf_{t \geq 0} L_{\mathrm{ood}}\left(v_{\mathrm{ft}}^i(t), B_{\mathrm{ft}}^i(t)\right)} \leq \epsilon', \tag{A.106}
$$

as desired.

We now prove the lemmas that we used in the above proof.

# A.3.1 CONVERGENCE OF PRINCIPAL ANGLE

Theorem 3.4 assumes conditions on the angle between the perfect feature extractor $B_\star$ and the ID subspace $S$. However, fine-tuning and linear probing start from features $B_0$ with some error, and do not get access to $B_\star$. We show that if $B_0$ and $B_\star$ are close, then the angle between each of their rowspaces and a third subspace $T$ (which could be the ID subspace $S$) is similar.

Lemma A.10.
Given two feature extractors $B_0, B_\star \in \mathbb{R}^{k \times d}$ with orthonormal rows, where $R_0 = \text{rowspace}(B_0), R_* = \text{rowspace}(B_\star)$, and a subspace $T$ with dimension at least 1, we have:

$$
\left| \cos \theta_{\max}\left(R_0, T\right) - \cos \theta_{\max}\left(R_*, T\right) \right| \leq d\left(B_0, B_\star\right) \tag{A.107}
$$

Proof. Recall that $k = \dim(R_0) = \dim(R_*)$. Let $r = \min(k, \dim(T))$ and let $F$ be a $d$-by-$\dim(T)$ matrix with orthonormal columns that form a basis for $T$. We have, for an arbitrary rotation matrix $U \in \mathbb{R}^{k \times k}$:

$$
\begin{aligned}
\cos \theta_{\max}\left(R_0, T\right) &= \sigma_r\left(B_0 F\right) && \text{(A.108)} \\
&= \sigma_r\left(U B_0 F\right) && \text{(A.109)} \\
&= \sigma_r\left(B_\star F + \left(U B_0 - B_\star\right) F\right) && \text{(A.110)} \\
&\geq \sigma_r\left(B_\star F\right) - \sigma_1\left(\left(U B_0 - B_\star\right) F\right) && \text{(A.111)} \\
&\geq \sigma_r\left(B_\star F\right) - \sigma_1\left(U B_0 - B_\star\right) && \text{(A.112)} \\
&= \sigma_r\left(B_\star F\right) - \|U B_0 - B_\star\|_2 && \text{(A.113)} \\
&= \cos \theta_{\max}\left(R_*, T\right) - \|U B_0 - B_\star\|_2 && \text{(A.114)}
\end{aligned}
$$

Here in the first step we used the definition of $\cos \theta_{\max}$ (Definition 3.1), and the fact that $B_0^\top$ has orthonormal columns which form a basis for $R_0$ (the rowspace of $B_0$), so in Definition 3.1 we can substitute $E = B_0^\top$. To get Equation A.111 we used Weyl's theorem, which bounds singular values under perturbations: $\sigma_r(A + B) \geq \sigma_r(A) - \sigma_1(B)$. To get Equation A.112 we used the fact that $\|Fv\|_2 = \|v\|_2$ since $F$ has orthonormal columns.

Since this holds for all rotation matrices $U$, we can take the minimum over $U$ to get:

$$
\cos \theta_{\max}(R_0, T) \geq \cos \theta_{\max}(R_*, T) - \min_U \|U B_0 - B_\star\|_2 = \cos \theta_{\max}(R_*, T) - d\left(B_0, B_\star\right) \tag{A.115}
$$

Since the relationship between $B_0$ and $B_\star$ is symmetric (and the distance $d$ is symmetric), this gives us the desired result:

$$
\left| \cos \theta_{\max}\left(R_0, T\right) - \cos \theta_{\max}\left(R_*, T\right) \right| \leq d\left(B_0, B_\star\right) \tag{A.116}
$$

Lemma A.11. Given a sequence of pretrained feature extractors $\{B_0^i\}_{i=1}^\infty$ with $B_0^i \to B_\star$, where $B_0^i, B_\star \in \mathbb{R}^{k \times d}$ have orthonormal rows, let $R^i = \text{rowspace}(B_0^i), R_* = \text{rowspace}(B_\star)$. Then for any subspace $T$, we have:

$$
\cos \theta_{\max}\left(R^i, T\right) \to \cos \theta_{\max}\left(R_*, T\right), \quad \text{as } i \to \infty. \tag{A.117}
$$

Proof. This follows directly from Lemma A.10. $B_0^i \to B_\star$ means $d(B_0^i, B_\star) \to 0$. Then from Lemma A.10:

$$
\left| \cos \theta_{\max}\left(R^i, T\right) - \cos \theta_{\max}\left(R_*, T\right) \right| \to 0, \quad \text{as } i \to \infty \tag{A.118}
$$

This means $\cos \theta_{\max}(R^i, T) \to \cos \theta_{\max}(R_*, T)$ as $i \to \infty$.

# A.3.2 BOUNDING THE HEAD ERROR

We prove a lower bound on $\operatorname{Head-Error}(v_0, v_\star) = |(v_0^\top v_\star)^2 - (v_\star^\top v_\star)^2|$, which was a key term in the fine-tuning lower bound (Theorem 3.2). Note that if the head is initialized as $v_0 = 0$, then $\operatorname{Head-Error}(v_0, v_\star) = \|v_\star\|_2^2 = \|w_\star\|_2^2$.
In practice, the head is usually initialized randomly, for example normally distributed. Intuitively, the head error is still high because we do not know which direction the head is pointing in, so most of the time the initial (randomly sampled) head will be pointing in the wrong direction. If $v_0 \sim N(0, \sigma^2 I)$, one can show that for any $\sigma^2$ the head error will still typically be at least $\Omega(\|v_\star\|_2)$. This is an illustrative result; one can show similar results for other random initializations as well.

We first prove an anti-concentration lemma, which says that if $u$ is a univariate Gaussian, then it cannot be too close to any particular constant $a$, no matter how the variance of the Gaussian is chosen.

Lemma A.12. For some universal constant $c$, given $a > 0$, for all $\nu^2$, if $u \sim N(0, \nu^2)$ then for all $0 \leq \delta \leq 1$:

$$
P\left(|u - a| \leq c \delta a\right) \leq \delta \tag{A.119}
$$

Proof. Consider $\delta$ such that $\delta \leq 1/10$. Then for all $u$ with $|u - a| \leq \delta a$, we have $u \geq 9a/10$. For all $u \geq 9a/10$, the density $f(u)$ is upper bounded (from the formula for the density of a Gaussian random variable) by:

$$
f(u) \leq O\left(\frac{1}{\nu} \exp\left(\frac{-9^2 a^2}{2 \cdot 10^2 \nu^2}\right)\right) \tag{A.120}
$$

We can maximize this over $\nu$ explicitly (e.g., using Mathematica, or by taking the logarithm and setting the derivative to 0), and we get for some universal constant $c' \geq 10$ (it is OK to choose a larger universal constant than needed):

$$
f(u) \leq \frac{c'}{a} \tag{A.121}
$$

Since the density is at most $c'/a$ and the event $|u - a| \leq \delta a$ corresponds to an interval of length $2 \delta a$, we get for all $\delta \leq 1/10$:

$$
P\left(|u - a| \leq \delta a\right) \leq \frac{2 c' \delta a}{a} = 2 c' \delta \tag{A.122}
$$

Now, we substitute $\delta' = 2 c' \delta$.
We get for all $\delta' \leq 2c'/10$:

$$
P\left(\left|u - a\right| \leq \frac{1}{2 c'} \delta' a\right) \leq \delta' \tag{A.123}
$$

Since $c' \geq 10$, we have $2c'/10 \geq 2 \geq 1$, so the statement is true for all $0 \leq \delta' \leq 1$.

We now bound the error in the head if the initialization is Gaussian. This bound holds for all initialization variances $\sigma^2$. Similar bounds can be shown for other (non-Gaussian) head initializations using similar anti-concentration arguments.

Lemma A.13. For some universal constant $c$, for all $v_\star \in \mathbb{R}^k$ with $v_\star \neq 0$, $\sigma \in \mathbb{R}^+$, $\delta \in [0,1]$, if $v_0 \sim N(0, \sigma^2 I_k)$, we have with probability at least $1 - \delta$:

$$
\left(\text{Head-Error}\left(v_0, v_\star\right)\right)^2 := \left| \left(v_0^\top v_\star\right)^2 - \left(v_\star^\top v_\star\right)^2 \right| \geq c \delta \left(v_\star^\top v_\star\right)^2 \tag{A.124}
$$

Proof. First note that $\operatorname{Head-Error}(v_0, v_\star) = \operatorname{Head-Error}(-v_0, v_\star)$, that $v_0$ is symmetric around $0$ ($v_0$ and $-v_0$ have the same density), and that $v_0^\top v_\star$ is almost surely not exactly $0$. So without loss of generality, we can suppose that $v_0^\top v_\star \geq 0$.
Suffices to bound $|v_0^\top v_\star - v_\star^\top v_\star|$: We decompose the error:

$$
\begin{aligned}
\left| \left(v_0^\top v_\star\right)^2 - \left(v_\star^\top v_\star\right)^2 \right| &= \left| v_0^\top v_\star - v_\star^\top v_\star \right| \left| v_0^\top v_\star + v_\star^\top v_\star \right| && \text{(A.125)} \\
&\geq \left| v_0^\top v_\star - v_\star^\top v_\star \right| \left(v_\star^\top v_\star\right) && \text{(A.126)}
\end{aligned}
$$

where the second line uses $v_0^\top v_\star \geq 0$. So we bound $|v_0^\top v_\star - v_\star^\top v_\star|$.

$v_0^\top v_\star$ is normally distributed: We note that $v_0^\top v_\star$ is distributed as:

$$
v_0^\top v_\star \sim N\left(0, \sigma^2 v_\star^\top v_\star\right) \tag{A.127}
$$

In other words, a normal with mean 0 and variance $\sigma_1^2 = \sigma^2 v_\star^\top v_\star$, and therefore standard deviation $\sigma_1 = \sigma \sqrt{v_\star^\top v_\star}$.

Apply Gaussian anti-concentration lemma: Then, from Lemma A.12 (applied with $a = v_\star^\top v_\star$), we have for some universal constant $c$ that with probability at least $1 - \delta$:

$$
\left| v_0^\top v_\star - v_\star^\top v_\star \right| \geq c \delta v_\star^\top v_\star \tag{A.128}
$$

So substituting this back into Equation A.126, we get the desired result:

$$
\left| \left(v_0^\top v_\star\right)^2 - \left(v_\star^\top v_\star\right)^2 \right| \geq c \delta \left(v_\star^\top v_\star\right)^2 \tag{A.129}
$$

# A.3.3 UPPER BOUNDING LINEAR PROBING ERROR

We showed a lower bound for the OOD error of fine-tuning in Theorem 3.2. To compare this with linear probing, we prove an upper bound on the OOD error of linear probing.
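Before turning to the linear probing bound, the Gaussian anti-concentration step above (Lemma A.12) lends itself to a quick Monte Carlo sanity check. The following numpy sketch is illustrative: the constant 20 is a deliberately loose stand-in for the lemma's universal constant, and the grid of variances is our choice.

```python
import numpy as np

rng = np.random.default_rng(2)
a, delta, n = 3.0, 0.05, 200_000

# Estimate P(|u - a| <= delta * a) for u ~ N(0, nu^2) over a range of variances;
# Lemma A.12 says this stays O(delta) uniformly in nu.
worst = max(
    np.mean(np.abs(rng.normal(0.0, nu, n) - a) <= delta * a)
    for nu in [0.1, 0.5, 1.0, 3.0, 10.0, 100.0]
)
assert worst <= 20 * delta  # generous constant; a sanity check, not a proof
```

The worst case is attained when $\nu$ is on the order of $a$, matching the density maximization in the proof.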
For completeness, we include an elementary lemma (note that the condition that the matrices are tall is important when composing $\sigma_{\min}$, unlike for $\sigma_{\max}$; we include this lemma to be careful about these conditions):

Lemma A.14. Suppose we have two matrices $A, B$ of shapes $(r,s)$ and $(s,t)$ respectively, and they are tall matrices, so $r \geq s \geq t$. Then we have:

$$
\sigma_{\min}(AB) \geq \sigma_{\min}(A) \sigma_{\min}(B) \tag{A.130}
$$

Proof. For a tall matrix $A$, we have:

$$
\sigma_{\min}(A) = \min_{\|x\|_2 = 1} \|Ax\|_2 \tag{A.131}
$$

In particular, $\|Ay\|_2 \geq \sigma_{\min}(A) \|y\|_2$ for every $y$. Applying this with $y = Bx$, we have:

$$
\sigma_{\min}(AB) = \min_{\|x\|_2 = 1} \|ABx\|_2 \geq \sigma_{\min}(A) \min_{\|x\|_2 = 1} \|Bx\|_2 = \sigma_{\min}(A) \sigma_{\min}(B) \tag{A.132}
$$

which completes the proof.

Lemma A.15. In the linear overparameterized setting, under the ID subspace assumption, fix arbitrary $P_z$.
Then there exists $c_{\delta}$ such that with probability at least $1 - \delta$ , for all $d,n,m,k,w_{\star}$ , feature extractors $B_{\star}, B_{0}$ , and ID subspaces $S$ with corresponding $F$ (whose columns are orthonormal and form a basis for $S$ ), if $\cos \theta_{\max}(S,R) > 0$ , we have:
+
+$$
+\sqrt {L _ {\mathrm {o o d}} \left(v _ {\mathrm {l p}} ^ {\infty} , B _ {0}\right)} \leq \left(\frac {c _ {\delta}}{\cos \theta_ {\max } (S , R)}\right) ^ {2} d \left(B _ {0}, B _ {\star}\right) \| w _ {\star} \| _ {2} \tag {A.133}
+$$
+
+If $P_{z}$ is the isotropic Gaussian $\mathcal{N}(0, I_{m})$ , then we derive a bound for $c_{\delta}$ analytically: if $n \geq 5m$ and $n \geq 10 \log \frac{1}{\delta}$ then with probability at least $1 - \delta$ , the linear probing OOD error is upper bounded by:
+
+$$
+\sqrt {L _ {\mathrm {o o d}} \left(v _ {\mathrm {l p}} ^ {\infty} , B _ {0}\right)} \leq O \left(\frac {\log (n / \delta)}{\left(\cos \theta_ {\max } (R , S)\right) ^ {2}} d \left(B _ {0}, B _ {\star}\right) \| w _ {\star} \| _ {2}\right) \tag {A.134}
+$$
+
+Proof. From the ID subspace assumption, the data matrix $X$ of shape $(n,d)$ can be written as $X = ZF^{\top}$ , where $Z$ is a matrix of shape $(n,m)$ with each row $Z_{i}$ sampled iid from $P_{z}$ , and $F$ is a matrix of shape $(d,m)$ whose columns are orthonormal and form a basis for the ID subspace $S$ .
+
+Let $\epsilon = \| B_{\star} - B_0\| _2$ . We first prove the bounds in terms of $\epsilon$ , and later handle the fact that the feature extractor distance involves a min over rotation matrices $U$ : $d(B_0,B_\star) = \min_U\| UB_0 - B_\star \| _2$ .
+
+Bounding key singular values: Before proceeding with the proof, we examine a key quantity, $XB_0^\top = ZF^\top B_0^\top$ , which comes up in the Hessian of the loss function. We will show that it has full column rank almost surely, and get a lower bound on its min singular value.
+
+First, we examine the shapes of the matrices.
$ZF^{\top}B_{0}^{\top}$ has shape $(n,k)$ , where $Z$ has shape $(n,m)$ and $F^{\top}B_{0}^{\top}$ has shape $(m,k)$ . Since $n\geq m > k$ we have that $Z$ and $F^{\top}B_{0}^{\top}$ are tall matrices, and so from Lemma A.14 we can write the min singular value of $ZF^{\top}B_{0}^{\top}$ as:
+
+$$
+\sigma_ {\min } \left(Z F ^ {\top} B _ {0} ^ {\top}\right) \geq \sigma_ {\min } (Z) \sigma_ {\min } \left(F ^ {\top} B _ {0} ^ {\top}\right) \tag {A.135}
+$$
+
+Now from the definition of the principal angle (Definition 3.1), we have:
+
+$$
+\sigma_ {\min } \left(F ^ {\top} B _ {0} ^ {\top}\right) = \cos \theta_ {\max } (R, S) > 0. \tag {A.136}
+$$
+
+Since we assumed $P_{z}$ has density in the ID subspace assumption, from Lemma 3 in Xie et al. (2021a) we get that for some $c_{\delta}^{\prime} > 0$ that depends on $\delta$ and $P_{z}$ , with probability at least $1 - \delta$ :
+
+$$
+\sigma_ {\min } (Z) \geq c _ {\delta} ^ {\prime} \tag {A.137}
+$$
+
+Note that this also means that $\sigma_{\mathrm{min}}(ZF^{\top}B_0^{\top}) > 0$ and so $XB_0^\top = ZF^\top B_0^\top$ has full rank $k$ almost surely. This also implies that $B_0X^\top XB_0^\top$ is a matrix of shape $(k,k)$ that is invertible almost surely.
+
+Main proof: Since $B_0X^\top XB_0^\top$ is invertible almost surely, there is a unique global minimum (minimizing over $v$ ) to the loss optimized by linear probing:
+
+$$
+\underset {v} {\operatorname {a r g m i n}} \| X B _ {0} ^ {\top} v - X B _ {\star} ^ {\top} v _ {\star} \| _ {2} ^ {2} = \left(B _ {0} X ^ {\top} X B _ {0} ^ {\top}\right) ^ {- 1} B _ {0} X ^ {\top} X B _ {\star} ^ {\top} v _ {\star} \tag {A.138}
+$$
+
+We can see this by noting that the loss function on the LHS is strongly convex in $v$ since the Hessian $B_0X^\top XB_0^\top$ is invertible.
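The closed form in Equation A.138 is easy to sanity-check numerically. Below is a minimal numpy sketch (the sizes and random instances are illustrative, not part of the proof): we build $X = ZF^\top$ as in the ID subspace assumption, solve the least-squares problem directly, and compare against the closed-form expression.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, k = 50, 20, 10, 3  # illustrative sizes with n >= m > k

# Data matrix X = Z F^T from the ID subspace assumption
F = np.linalg.qr(rng.normal(size=(d, m)))[0]   # orthonormal columns spanning S
Z = rng.normal(size=(n, m))                    # rows sampled iid from P_z
X = Z @ F.T

# Feature extractors with orthonormal rows, and the target head v_star
B0 = np.linalg.qr(rng.normal(size=(d, k)))[0].T
Bstar = np.linalg.qr(rng.normal(size=(d, k)))[0].T
v_star = rng.normal(size=k)

# Direct least-squares solution of min_v ||X B0^T v - X Bstar^T v_star||^2
v_lstsq, *_ = np.linalg.lstsq(X @ B0.T, X @ Bstar.T @ v_star, rcond=None)

# Closed form (B0 X^T X B0^T)^{-1} B0 X^T X Bstar^T v_star from Equation A.138
H = B0 @ X.T @ X @ B0.T   # invertible a.s. since sigma_min(X B0^T) > 0
v_closed = np.linalg.solve(H, B0 @ X.T @ X @ Bstar.T @ v_star)

assert np.allclose(v_lstsq, v_closed)
```

The two solutions agree because $XB_0^\top$ has full column rank almost surely, so the least-squares minimizer is unique.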
Then, gradient flow converges to the unique minimizer on the RHS, so:
+
+$$
+v _ {\mathrm {l p}} ^ {\infty} = \left(B _ {0} X ^ {\top} X B _ {0} ^ {\top}\right) ^ {- 1} B _ {0} X ^ {\top} X B _ {\star} ^ {\top} v _ {\star} \tag {A.139}
+$$
+
+We now bound the square-root OOD error (taking the square root makes it easier to apply triangle inequalities), starting with the definition:
+
+$$
+\sqrt {L _ {\mathrm {o o d}} \left(v _ {\mathrm {l p}} ^ {\infty} , B _ {0}\right)} = \| B _ {\star} ^ {\top} v _ {\star} - B _ {0} ^ {\top} v _ {\mathrm {l p}} ^ {\infty} \| _ {2} \tag {A.140}
+$$
+
+$$
+= \left\| \left(B _ {\star} ^ {\top} v _ {\star} - B _ {0} ^ {\top} v _ {\star}\right) + \left(B _ {0} ^ {\top} v _ {\star} - B _ {0} ^ {\top} v _ {\mathrm {l p}} ^ {\infty}\right) \right\| _ {2} \tag {A.141}
+$$
+
+$$
+\leq \underbrace {\left\| B _ {\star} ^ {\top} v _ {\star} - B _ {0} ^ {\top} v _ {\star} \right\| _ {2}} _ {(1)} + \underbrace {\left\| B _ {0} ^ {\top} v _ {\star} - B _ {0} ^ {\top} v _ {\mathrm {l p}} ^ {\infty} \right\| _ {2}} _ {(2)} \tag {A.142}
+$$
+
+We bound each term on the RHS of the last line. For term (1):
+
+$$
+\left\| B _ {\star} ^ {\top} v _ {\star} - B _ {0} ^ {\top} v _ {\star} \right\| _ {2} \leq \sigma_ {\max } \left(B _ {\star} - B _ {0}\right) \| v _ {\star} \| _ {2} \tag {A.143}
+$$
+
+$$
+\leq \epsilon \| v _ {\star} \| _ {2} \tag {A.144}
+$$
+
+$$
+= \epsilon \| w _ {\star} \| _ {2}. \tag {A.145}
+$$
+
+Where we note that $\| v_{\star}\| _2 = \| w_{\star}\| _2$ because $w_{\star} = B_{\star}^{\top}v_{\star}$ where the rows of $B_{\star}$ (columns of $B_{\star}^{\top}$ ) are orthonormal.
+
+Let $\Sigma = X^{\top}X$ .
For term (2), we first substitute $v_{\mathrm{lp}}^{\infty}$ and do some algebra (again noting that $\| v_{\star}\| _2 = \| w_{\star}\| _2$ ) to get:
+
+$$
+\left\| B _ {0} ^ {\top} v _ {\star} - B _ {0} ^ {\top} v _ {\mathrm {l p}} ^ {\infty} \right\| _ {2} = \left\| B _ {0} ^ {\top} \left(B _ {0} \Sigma B _ {0} ^ {\top}\right) ^ {- 1} B _ {0} \Sigma B _ {0} ^ {\top} v _ {\star} - B _ {0} ^ {\top} v _ {\mathrm {l p}} ^ {\infty} \right\| _ {2} \tag {A.146}
+$$
+
+$$
+= \left\| B _ {0} ^ {\top} \left(B _ {0} \Sigma B _ {0} ^ {\top}\right) ^ {- 1} B _ {0} \Sigma \left(B _ {0} - B _ {\star}\right) ^ {\top} v _ {\star} \right\| _ {2} \tag {A.147}
+$$
+
+$$
+\leq \sigma_ {\max } \left(B _ {0} ^ {\top} \left(B _ {0} \Sigma B _ {0} ^ {\top}\right) ^ {- 1} B _ {0} \Sigma\right) \sigma_ {\max } \left(B _ {0} - B _ {\star}\right) \| w _ {\star} \| _ {2} \tag {A.148}
+$$
+
+$$
+\leq \sigma_ {\max } \left(B _ {0} ^ {\top} \left(B _ {0} \Sigma B _ {0} ^ {\top}\right) ^ {- 1} B _ {0} \Sigma\right) \epsilon \| w _ {\star} \| _ {2} \tag {A.149}
+$$
+
+$$
+\leq \sigma_ {\max } \left(B _ {0}\right) ^ {2} \sigma_ {\max } (\Sigma) \frac {1}{\sigma_ {\min } \left(B _ {0} \Sigma B _ {0} ^ {\top}\right)} \epsilon \| w _ {\star} \| _ {2} \tag {A.150}
+$$
+
+$$
+\leq \frac {\sigma_ {\max } \left(B _ {0}\right) ^ {2} \sigma_ {\max } (X) ^ {2}}{\sigma_ {\min } \left(X B _ {0} ^ {\top}\right) ^ {2}} \epsilon \| w _ {\star} \| _ {2} \tag {A.151}
+$$
+
+$$
+= \frac {\sigma_ {\max } \left(B _ {0}\right) ^ {2} \sigma_ {\max } \left(Z F ^ {\top}\right) ^ {2}}{\sigma_ {\min } \left(Z F ^ {\top} B _ {0} ^ {\top}\right) ^ {2}} \epsilon \| w _ {\star} \| _ {2} \tag {A.152}
+$$
+
+$$
+\leq \frac {\sigma_ {\max } (B _ {0}) ^ {2} \sigma_ {\max } (Z) ^ {2}}{\sigma_ {\min } (Z) ^ {2} (\cos \theta_ {\max } (R , S)) ^ {2}} \epsilon \| w _ {\star} \| _ {2} \tag {A.153}
+$$
+
+Where in the first line we substituted in the closed form for $v_{\mathrm{lp}}^{\infty}$ from Equation A.138, and in the last line we
used the fact that $\sigma_{\max}(ZF^{\top}) \leq \sigma_{\max}(Z)$ since $F^{\top}$ has orthonormal rows, and $\sigma_{\min}(ZF^{\top}B_0^{\top}) \geq \sigma_{\min}(Z)\cos \theta_{\max}(R,S)$ as explained in Equation A.135 and Equation A.136.
+
+So it suffices to bound the quantities on the RHS. Since $B_0$ has orthonormal rows, $\sigma_{\max}(B_0) = 1$ .
+
+No Gaussian assumption: For the first part of the Theorem (Equation A.133, where we make no Gaussian assumptions but give a less quantitative bound), we just use the fact that $\sigma_{\max}(Z)$ is finite almost surely, and $\sigma_{\min}(Z) \geq c_{\delta}'$ with probability at least $1 - \delta$ . This implies that for some $c_{\delta} > 0$ with probability at least $1 - \delta$ :
+
+$$
+\sqrt {L _ {\mathrm {o o d}} \left(v _ {\mathrm {l p}} ^ {\infty} , B _ {0}\right)} \leq \left(\frac {c _ {\delta}}{\cos \theta_ {\max } (S , R)}\right) ^ {2} \epsilon \| w _ {\star} \| _ {2}, \tag {A.155}
+$$
+
+where $\epsilon = \| B_0 - B_\star \| _2$ .
+
+Gaussian assumption: For the second part of the Theorem (Equation A.134, where we assume $P_{z}$ is Gaussian), we use results from random matrix theory to lower bound $\sigma_{\mathrm{min}}(Z)$ and upper bound $\sigma_{\mathrm{max}}(Z)$ .
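These random-matrix bounds can be checked empirically. The sketch below (sizes and failure probability are illustrative) samples Gaussian matrices $Z$ and verifies that the lower bound $\sigma_{\min}(Z) \geq \sqrt{n} - \sqrt{m} - \sqrt{2\log(1/\delta)}$ used next holds in at least a $1 - \delta$ fraction of trials:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, delta = 500, 50, 0.01   # illustrative sizes; n >= 5m as assumed
trials = 200

# Lower bound sqrt(n) - sqrt(m) - sqrt(2 log(1/delta)) on sigma_min(Z)
lower = np.sqrt(n) - np.sqrt(m) - np.sqrt(2 * np.log(1 / delta))

# Smallest singular value of each sampled Gaussian matrix Z
smins = np.array([
    np.linalg.svd(rng.normal(size=(n, m)), compute_uv=False).min()
    for _ in range(trials)
])

# The bound should hold with probability at least 1 - delta
assert np.mean(smins >= lower) >= 1 - delta
```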
For the lower bound we use a result from Rudelson & Vershynin (2009) (see page 4, in the equation below Equation 1.11): since $Z \in \mathbb{R}^{n \times m}$ is a matrix with each entry sampled from $\mathcal{N}(0,1)$ , we get for all $t > 0$ :
+
+$$
+\mathbb {P} \left(\sigma_ {\min } (Z) \leq \sqrt {n} - \sqrt {m} - t\right) \leq e ^ {- t ^ {2} / 2} \tag {A.156}
+$$
+
+With a bit of algebra, this gives us that with probability at least $1 - \delta$ :
+
+$$
+\sigma_ {\min } (Z) \geq \sqrt {n} - \sqrt {m} - \sqrt {2 \log \frac {1}{\delta}} \tag {A.157}
+$$
+
+We assumed $n \geq 5m$ and $n \geq 10\log \frac{1}{\delta}$ , so we get:
+
+$$
+\sigma_ {\min } (Z) \geq O (\sqrt {n}) \tag {A.158}
+$$
+
+The upper bound is a standard matrix concentration bound—we use the high probability bound in Theorem 4.1.1 from Tropp (2015) (see Section 4.2.2 which calculates the variance statistic for rectangular Gaussian matrices, also notice the square on the LHS below):
+
+$$
+\sigma_ {\max } (Z) ^ {2} \leq O \left(n \log \frac {n}{\delta}\right) \tag {A.159}
+$$
+
+Substituting the lower bound on $\sigma_{\mathrm{min}}(Z)$ and the upper bound on $\sigma_{\mathrm{max}}(Z)$ into Equation A.146 we get:
+
+$$
+\left\| B _ {0} ^ {\top} v _ {\star} - B _ {0} ^ {\top} v _ {\mathrm {l p}} ^ {\infty} \right\| _ {2} \leq O \left(\frac {\log (n / \delta)}{(\cos \theta_ {\max } (R , S)) ^ {2}} \epsilon \| w _ {\star} \| _ {2}\right) \tag {A.160}
+$$
+
+Substituting into Equation A.140, we have:
+
+$$
+\sqrt {L _ {\mathrm {o o d}} \left(v _ {\mathrm {l p}} ^ {\infty} , B _ {0}\right)} \leq O \left(\frac {\log (n / \delta)}{\left(\cos \theta_ {\max } (R , S)\right) ^ {2}} \epsilon \| w _ {\star} \| _ {2}\right), \tag {A.161}
+$$
+
+where $\epsilon = \| B_0 - B_\star\|_2$ , which completes the proof of the second part (Equation A.134).
+
+Handling the rotation matrix $U$ : We now handle the fact that the feature extractor distance involves the min over rotation matrices $U$ : $d(B_0, B_\star) = \min_U \|UB_0 - B_\star\|_2$ .
Let $v_{\mathrm{lp}}^\infty(B_0)$ denote the linear probing head solution if we use a pretrained feature extractor $B_0$ . We first note that for any $k$ -by- $k$ rotation matrix $U$ , we have:
+
+$$
+L _ {\mathrm {o o d}} \left(v _ {\mathrm {l p}} ^ {\infty} \left(B _ {0}\right), B _ {0}\right) = L _ {\mathrm {o o d}} \left(v _ {\mathrm {l p}} ^ {\infty} \left(U B _ {0}\right), U B _ {0}\right). \tag {A.162}
+$$
+
+This follows from using the closed form we derived above for $v_{\mathrm{lp}}^{\infty}(B_0)$ and some simple algebraic manipulation (e.g., recall that $U^{-1} = U^{\top}$ ):
+
+$$
+\begin{array}{l} \left(U B _ {0}\right) ^ {\top} v _ {\mathrm {l p}} ^ {\infty} \left(U B _ {0}\right) = \left(U B _ {0}\right) ^ {\top} \left(U B _ {0} X ^ {\top} X B _ {0} ^ {\top} U ^ {\top}\right) ^ {- 1} U B _ {0} X ^ {\top} X B _ {\star} ^ {\top} v _ {\star} \quad (A.163) \\ = B _ {0} ^ {\top} U ^ {\top} U \left(B _ {0} X ^ {\top} X B _ {0} ^ {\top}\right) ^ {- 1} U ^ {\top} U B _ {0} X ^ {\top} X B _ {\star} ^ {\top} v _ {\star} \quad (A.164) \\ = B _ {0} ^ {\top} \left(U ^ {\top} U\right) \left(B _ {0} X ^ {\top} X B _ {0} ^ {\top}\right) ^ {- 1} \left(U ^ {\top} U\right) B _ {0} X ^ {\top} X B _ {\star} ^ {\top} v _ {\star} \quad (A.165) \\ = B _ {0} ^ {\top} \left(B _ {0} X ^ {\top} X B _ {0} ^ {\top}\right) ^ {- 1} B _ {0} X ^ {\top} X B _ {\star} ^ {\top} v _ {\star} \quad (A.166) \\ = B _ {0} ^ {\top} v _ {\mathrm {l p}} ^ {\infty} \left(B _ {0}\right) \quad (A.167) \\ \end{array}
+$$
+
+So the final predictors in both cases, $(UB_0)^\top v_{\mathrm{lp}}^\infty (UB_0)$ and $B_0^\top v_{\mathrm{lp}}^\infty (B_0)$ , are identical. This means that the OOD error $L_{\mathrm{ood}}(v,B) = \| B^{\top}v - B_{\star}^{\top}v_{\star}\| _2^2$ is the same in both cases.
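This rotation invariance is also easy to verify numerically. A minimal sketch (sizes and random matrices are illustrative; $U$ is a random orthogonal matrix, since $U^\top U = I$ is all the algebra above uses):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 40, 15, 4   # illustrative sizes

X = rng.normal(size=(n, d))
B0 = np.linalg.qr(rng.normal(size=(d, k)))[0].T      # orthonormal rows
Bstar = np.linalg.qr(rng.normal(size=(d, k)))[0].T
v_star = rng.normal(size=k)
U = np.linalg.qr(rng.normal(size=(k, k)))[0]         # k-by-k orthogonal matrix

def lp_head(B):
    """Closed-form linear probing head for feature extractor B (Equation A.139)."""
    H = B @ X.T @ X @ B.T
    return np.linalg.solve(H, B @ X.T @ X @ Bstar.T @ v_star)

# The final predictors in w-space coincide, so the OOD error is identical
pred_B0 = B0.T @ lp_head(B0)
pred_UB0 = (U @ B0).T @ lp_head(U @ B0)
assert np.allclose(pred_B0, pred_UB0)
```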
+
+This means that we can just take the min over all rotation matrices $U$ (the first step uses Equation A.162, which says the OOD error is the same for every rotation $U$ , and the second step is from Equation A.155):
+
+$$
+\begin{array}{l} \sqrt {L _ {\mathrm {o o d}} \left(v _ {\mathrm {l p}} ^ {\infty} (B _ {0}), B _ {0}\right)} \leq \min _ {U} \sqrt {L _ {\mathrm {o o d}} \left(v _ {\mathrm {l p}} ^ {\infty} \left(U B _ {0}\right), U B _ {0}\right)} \quad (A.168) \\ \leq \min _ {U} \left(\frac {c _ {\delta}}{\cos \theta_ {\max } (S , R)}\right) ^ {2} \| U B _ {0} - B _ {\star} \| _ {2} \| w _ {\star} \| _ {2} \quad (A.169) \\ = \left(\frac {c _ {\delta}}{\cos \theta_ {\max } (S , R)}\right) ^ {2} d \left(B _ {0}, B _ {\star}\right) \| w _ {\star} \| _ {2}, \quad (A.170) \\ \end{array}
+$$
+
+which is as desired. We repeat the same argument with Equation A.161 to get Equation A.134 in the Theorem statement.
+
+# A.4 LP VS. FT (OOD), NON-ASYMPTOTIC RESULT FOR GAUSSIAN COVARIATES
+
+Theorem 3.4 showed an asymptotic result: if the error $d(B_0, B_\star) \to 0$, then linear probing (LP) achieves better out-of-distribution (OOD) error than fine-tuning (FT). Here we give a more quantitative version of Theorem 3.4 for Gaussian covariates. The result can be extended to the case where each coordinate of $z \sim P_z$ is independent and identically distributed with mean zero and constant non-zero variance, and is sub-Gaussian (with constant sub-Gaussian moment) instead of Gaussian: this can be shown using Theorem 1.1 in Rudelson & Vershynin (2009), which is a different matrix concentration inequality.
+
+We show that LP does better than FT out-of-distribution if the error is less than a specific quantity (in terms of the representation dimension $k$ , and the angles between the ID subspace $S$ and the important pretrained directions $R_{*} = \mathrm{rowspace}(B_{\star})$ ).
+
+Theorem A.16.
In the linear overparameterized setting, under the ID subspace assumption, assume the non-degeneracy conditions $\cos \theta_{\max}(R_*,S) > 0$ and $\cos \theta_{\max}(R_*,S^{\perp}) > 0$ , where $R_{*} = \mathrm{rowspace}(B_{\star})$ . Suppose the covariates are generated from a Gaussian distribution on the ID subspace $S$ , so $P_{z} = \mathcal{N}(0,I_{m})$ . Let $\| w_{\star}\|_{2}$ be a fixed constant. Given failure probability $\delta \in (0,1)$ , for all $w_{\star},B_{0},n,d,k,\epsilon$ , if $n\geq 5m$ and $n\geq 10\log \frac{1}{\delta}$ , and if the error of the pretrained representation is not too high:
+
+$$
+d \left(B _ {0}, B _ {\star}\right) < O \left(\frac {\cos \theta_ {\max } \left(R _ {*} , S ^ {\perp}\right) \left(\cos \theta_ {\max } \left(R _ {*} , S\right)\right) ^ {2} \delta^ {2}}{\sqrt {k} \log (n / \delta)}\right), \tag {A.171}
+$$
+
+then with probability at least $1 - \delta$ , the OOD error of linear probing is lower (better) than for fine-tuning at all time steps $t \geq 0$ in the fine-tuning trajectory:
+
+$$
+L _ {\mathrm {o o d}} \left(v _ {\mathrm {l p}} ^ {\infty}, B _ {0}\right) < \inf _ {t \geq 0} L _ {\mathrm {o o d}} \left(v _ {\mathrm {f t}} (t), B _ {\mathrm {f t}} (t)\right). \tag {A.172}
+$$
+
+Proof. Let $\epsilon = d(B_0, B_\star)$ . We first note that the condition in Equation A.171 implies that $d(B_0, B_\star) < O(\cos \theta_{\max}(R_*, S^\perp))$ and $d(B_0, B_\star) < O(\cos \theta_{\max}(R_*, S))$ . This is because the cosine angles are between 0 and 1, $\delta$ is between 0 and 1, and $k$ and $n$ are at least 1. We now simplify and combine the linear probing and fine-tuning bounds.
+
+Let $R_0 = \mathrm{rowspace}(B_0)$ . Warning: note that Equation A.171 in the Theorem statement assumes conditions on the angles between $R_*$ (corresponding to the optimal representation) and the ID subspace $S$ .
However, our results that bounded the fine-tuning (Theorem 3.2) and linear probing (Lemma A.15) errors require conditions on the angles between $R_0$ (corresponding to the representation that linear probing and fine-tuning use) and $S$ . So we have to be careful about this distinction, and use Lemma A.10 to relate the two, which we do below.
+
+Fine-tuning: From Theorem 3.2, we get:
+
+$$
+\sqrt {L _ {\mathrm {o o d}} \left(v _ {\mathrm {f t}} (t) , B _ {\mathrm {f t}} (t)\right)} \geq O \left(\frac {\cos \theta_ {\max } \left(R _ {0} , S ^ {\perp}\right)}{\sqrt {k}} \frac {\min \left(\varphi , \varphi^ {2} / \| w _ {\star} \| _ {2}\right)}{\left(1 + \| w _ {\star} \| _ {2}\right) ^ {2}}\right) - \epsilon . \tag {A.173}
+$$
+
+Where $\varphi$ is the head-error, which we lower bounded in Lemma A.13. Substituting this bound, and noting that $\min (\varphi ,\varphi^2) = O(\varphi^2)$ and $\| v_{\star}\| _2 = \| w_{\star}\| _2$ (which we assumed is a constant), this gives us:
+
+$$
+\sqrt {L _ {\mathrm {o o d}} \left(v _ {\mathrm {f t}} (t) , B _ {\mathrm {f t}} (t)\right)} \geq O \left(\frac {\cos \theta_ {\max } \left(R _ {0} , S ^ {\perp}\right)}{\sqrt {k}} \delta^ {2}\right) - \epsilon \tag {A.174}
+$$
+
+Now, since $d(B_0,B_\star) = \epsilon$ , we use Lemma A.10 to get that:
+
+$$
+\cos \theta_ {\max } \left(R _ {0}, S ^ {\perp}\right) \geq \cos \theta_ {\max } \left(R _ {*}, S ^ {\perp}\right) - \epsilon \tag {A.175}
+$$
+
+Substituting this into Equation A.174, we get (notice the $R_{*}$ instead of $R_{0}$ below):
+
+$$
+\sqrt {L _ {\mathrm {o o d}} \left(v _ {\mathrm {f t}} (t) , B _ {\mathrm {f t}} (t)\right)} \geq O \left(\frac {\cos \theta_ {\max } \left(R _ {*} , S ^ {\perp}\right) - \epsilon}{\sqrt {k}} \delta^ {2}\right) - \epsilon \tag {A.176}
+$$
+
+Since $\epsilon \leq O(\cos \theta_{\mathrm{max}}(R_*, S^\perp))$ , this can be simplified to:
+
+$$
+\sqrt {L _ {\mathrm {o o d}} \left(v _ {\mathrm {f t}} (t) , B _ {\mathrm {f t}} (t)\right)} \geq O 
\left(\frac {\cos \theta_ {\max } \left(R _ {*} , S ^ {\perp}\right)}{\sqrt {k}} \delta^ {2}\right) - \epsilon \tag {A.177}
+$$
+
+Linear probing: From Lemma A.15 (Equation A.134), we get:
+
+$$
+\sqrt {L _ {\mathrm {o o d}} \left(v _ {\mathrm {l p}} ^ {\infty} , B _ {0}\right)} \leq O \left(\frac {\log (n / \delta)}{\left(\cos \theta_ {\max } \left(R _ {0} , S\right)\right) ^ {2}} \epsilon \| w _ {\star} \| _ {2}\right) \tag {A.178}
+$$
+
+Again, we use Lemma A.10 to get:
+
+$$
+\cos \theta_ {\max } \left(R _ {0}, S\right) \geq \cos \theta_ {\max } \left(R _ {*}, S\right) - \epsilon \tag {A.179}
+$$
+
+Substituting into Equation A.178, and using the fact that $\epsilon \leq O(\cos \theta_{\max}(R_*, S))$ , and since we assumed $\| w_\star \|_2$ is a constant, we get:
+
+$$
+\sqrt {L _ {\mathrm {o o d}} \left(v _ {\mathrm {l p}} ^ {\infty} , B _ {0}\right)} \leq O \left(\frac {\log (n / \delta)}{\left(\cos \theta_ {\max } \left(R _ {*}, S\right)\right) ^ {2}} \epsilon\right) \tag {A.180}
+$$
+
+Combining the two: We want to show that the OOD error of LP is less than for fine-tuning:
+
+$$
+O \left(\frac {\log (n / \delta)}{(\cos \theta_ {\max } (R _ {*} , S)) ^ {2}} \epsilon\right) \leq O \left(\frac {\cos \theta_ {\max } (R _ {*} , S ^ {\perp})}{\sqrt {k}} \delta^ {2}\right) - \epsilon \tag {A.181}
+$$
+
+We can bring the $\epsilon$ to the LHS, so this is equivalent to showing:
+
+$$
+O \left(\frac {\log (n / \delta)}{(\cos \theta_ {\max } (R _ {*} , S)) ^ {2}} \epsilon\right) + \epsilon \leq O \left(\frac {\cos \theta_ {\max } (R _ {*} , S ^ {\perp})}{\sqrt {k}} \delta^ {2}\right) \tag {A.182}
+$$
+
+Since $\log (n / \delta)\geq 1$ and $(\cos \theta_{\mathrm{max}}(R_*,S))^2$ is between 0 and 1, it suffices to fold the extra $\epsilon$ into the big-O on the LHS and show:
+
+$$
+O \left(\frac {\log (n / \delta)}{(\cos \theta_ {\max } \left(R _ {*} , S\right)) ^ {2}} \epsilon\right) \leq O \left(\frac {\cos \theta_ {\max } \left(R _ {*} , S ^ 
{\perp}\right)}{\sqrt {k}} \delta^ {2}\right) \tag {A.183}
+$$
+
+Assuming the condition on $\epsilon$ in Equation A.171 of the Theorem statement, this follows with a bit of algebra.
+
+# A.5 PRINCIPAL ANGLES ARE LIKELY NON-ZERO
+
+In Theorems 3.2, 3.4, and 3.5, we assumed the cosine of the largest principal angle between the representations and ID subspace (or complement of the ID subspace) was non-zero. For example, Theorem 3.4 assumed the cosine of the largest principal angle between $R_{*} = \mathrm{rowspace}(B_{\star})$ and the ID subspace $S$ is non-zero, and similarly for the angle between $R_{*}$ and $S^{\perp}$ . Having a cosine of 0 is a degenerate condition. As an example, look at Figure 2: here the input dimension $d = 2$ , the representation dimension $k = 1$ , and the ID subspace $S$ has dimension 1. The only way these cosines can be 0 is if $B_{\star}^{\top}$ is exactly in the same direction as $S$ or $S^{\perp}$ , which seems like too much of a coincidence. Intuitively, if nature introduces even a small amount of randomness in either the optimal representation or the ID subspace, the cosine will be non-zero.
+
+This example was in two dimensions. To make this intuition a bit more formal in higher dimensions, we prove a simple claim. Lemma A.17 shows that if $S$ is a randomly selected $m$-dimensional subspace, then $\cos\theta_{\max}(R_*, S)$ and $\cos\theta_{\max}(R_*, S^{\perp})$ are non-zero (and we get quantitative lower bounds on them).
+
+Lemma A.17. Let $R$ be a fixed $k$-dimensional subspace, and let $S$ be a uniformly random $m$-dimensional subspace (formally, a uniform measure on the Grassmannian manifold) in $\mathbb{R}^d$ with $m > k$ .
Then with probability at least $1 - \delta$ ,
+
+$$
+\cos \theta_ {\max } (R, S) \geq \frac {\sqrt {m} - \sqrt {k} - \sqrt {2 \log \frac {1}{\delta}}}{\sqrt {d \log \frac {2 d}{\delta}}} \tag {A.184}
+$$
+
+In addition, we get that $\cos \theta_{\mathrm{max}}(R,S) > 0$ almost surely (with probability 1).
+
+If $m \geq 5k$ and $m \geq 10\log \frac{1}{\delta}$ , then we get with probability at least $1 - \delta$ :
+
+$$
+\cos \theta_ {\max } (R, S) \geq O \left(\sqrt {\frac {m}{d \log \frac {2 d}{\delta}}}\right) \tag {A.185}
+$$
+
+Recall that big-oh notation here means that the inequality holds with some universal constant (independent of any other problem parameters).
+
+Proof. Note that principal angles are invariant if we rotate $R$ and $S$ by the same rotation matrix $U$ . That is, if we let $U \in \mathbb{R}^{d \times d}$ be a rotation matrix, and $E \in \mathbb{R}^{d \times k}$ , $F \in \mathbb{R}^{d \times m}$ have orthonormal columns which form a basis for $R$ and $S$ respectively, then we have:
+
+$$
+\cos \theta_ {\max } (R, S) = \sigma_ {k} \left(E ^ {\top} F\right) = \sigma_ {k} \left(\left(U E\right) ^ {\top} \left(U F\right)\right) \tag {A.186}
+$$
+
+This symmetry means that we can fix $S$ and instead consider $R$ to be a uniformly random $k$-dimensional subspace on the Grassmannian manifold. Without loss of generality, we can also fix $S$ to be the span of the first $m$ standard basis vectors: $(e_1, \ldots, e_m)$ , where $e_i \in \mathbb{R}^d$ has a 1 in the $i$ -th entry and a 0 in every other entry.
+
+Equivalently, let $M_R$ be a $d$ -by- $k$ matrix where each column is sampled independently from $N(0, I_d)$ . Since the columns of $M_R$ almost surely span a uniformly random $k$ -dimensional subspace, we can let $R$ be the range of $M_R$ . This is equivalent to sampling each entry of $M_R$ from $N(0, 1)$ .
+
+Let $c = \cos \theta_{\mathrm{max}}(R,S)$ .
From Lemma A.2, $c$ can be written as:
+
+$$
+c = \min _ {r \in R, \| r \| _ {2} = 1} \| F ^ {\top} r \| _ {2} = \min _ {r \in R, \| r \| _ {2} \geq 1} \| F ^ {\top} r \| _ {2} \tag {A.187}
+$$
+
+Since $R$ is the range of $M_R$ , any $r \in R$ can be written as $r = M_R \lambda$ for some $\lambda \in \mathbb{R}^k$ . We first show that $\| \lambda \|_2$ cannot be much smaller than $\| r \|_2$ . This is because:
+
+$$
+\| r \| _ {2} = \| M _ {R} \lambda \| _ {2} \leq \sigma_ {\max } (M _ {R}) \| \lambda \| _ {2} \tag {A.188}
+$$
+
+So this gives us:
+
+$$
+\| \lambda \| _ {2} \geq \frac {\| r \| _ {2}}{\sigma_ {\max } \left(M _ {R}\right)} \tag {A.189}
+$$
+
+So every $r \in R$ can be written as $M_R \lambda$ where $\| \lambda \|_2$ is lower bounded as above.
+
+We now simplify the definition of $c$ , starting from Equation A.187.
+
+$$
+\begin{array}{l} c = \min _ {r \in R, \| r \| _ {2} \geq 1} \| F ^ {\top} r \| _ {2} \quad (A.190) \\ \geq \min _ {\| \lambda \| \geq 1 / \sigma_ {\max } \left(M _ {R}\right)} \| F ^ {\top} M _ {R} \lambda \| _ {2} \quad (A.191) \\ \geq \min _ {\| \lambda \| \geq 1 / \sigma_ {\max } \left(M _ {R}\right)} \sigma_ {\min } \left(F ^ {\top} M _ {R}\right) \| \lambda \| _ {2} \quad (A.192) \\ = \frac {\sigma_ {\min } \left(F ^ {\top} M _ {R}\right)}{\sigma_ {\max } \left(M _ {R}\right)} \quad (A.193) \\ \end{array}
+$$
+
+So now we want to lower bound a ratio of singular values of two random matrices. We note that $F^{\top}M_{R}$ is a matrix of size $(m,k)$ with each entry sampled independently from $N(0,1)$ (this is because $F^{\top}$ simply selects the first $m$ rows of $M_{R}$ ). $M_{R}$ is a matrix of size $(d,k)$ with each entry sampled independently from $N(0,1)$ .
+
+Now, as in the Gaussian assumption step of the proof of Lemma A.15, we can apply standard matrix concentration bounds (page 4, below Equation 1.11, in Rudelson & Vershynin (2009) for the bound on $\sigma_{\mathrm{min}}$ , and Theorem 4.1.1 in Tropp (2015) for the bound on $\sigma_{\mathrm{max}}$ ).
We get that with probability at least $1 - \delta$ :
+
+$$
+\sigma_ {\min } \left(F ^ {\top} M _ {R}\right) \geq \sqrt {m} - \sqrt {k} - \sqrt {2 \log \frac {1}{\delta}} \tag {A.194}
+$$
+
+$$
+\sigma_ {\max } \left(M _ {R}\right) \leq \sqrt {d \log \frac {2 d}{\delta}} \tag {A.195}
+$$
+
+Note that we can use alternate bounds for $\sigma_{\mathrm{min}}$ in Rudelson & Vershynin (2009) that are sometimes tighter.
+
+For the ratio of the two, we get that with probability at least $1 - \delta$ , we have:
+
+$$
+c \geq \frac {\sigma_ {\min } \left(F ^ {\top} M _ {R}\right)}{\sigma_ {\max } \left(M _ {R}\right)} \geq \frac {\sqrt {m} - \sqrt {k} - \sqrt {2 \log \frac {2}{\delta}}}{\sqrt {d \log \frac {2 d}{\delta}}} \tag {A.196}
+$$
+
+For interpretability, ignoring log factors this is approximately:
+
+$$
+c \gtrsim \frac {\sqrt {m} - \sqrt {k}}{\sqrt {d}} \tag {A.197}
+$$
+
+The result when $m \geq 5k$ and $m \geq 10\log \frac{1}{\delta}$ follows with simple algebra.
+
+For the result where we show $\cos\theta_{\max}(R,S) > 0$ almost surely, we recall that $F^{\top}M_R$ is a matrix of size $(m,k)$ with each entry sampled independently from $N(0,1)$ . Then applying Lemma 3 in Xie et al. (2021a), we get that $\sigma_{\min}(F^{\top}M_R) > 0$ almost surely. Since $\sigma_{\max}(M_R)$ is finite, this gives us $\cos\theta_{\max}(R,S) > 0$ almost surely.
+
+In our case, the dimension of the ID subspace $S$ is $m$ , and the dimension of $R_{*} = \mathrm{rowspace}(B_{\star})$ is $k$ , with $k < m$ and $k < d - m$ . If $S$ is a uniformly random $m$ -dimensional subspace, then $S^{\perp}$ is a uniformly random $d - m$ dimensional subspace. In this case, Lemma A.17 tells us that $\cos \theta_{\max}(R_{*}, S) > 0$ and $\cos \theta_{\max}(R_{*}, S^{\perp}) > 0$ almost surely, and gives us quantitative lower bounds for these angles.
+
+# A.6 LP vs. 
FT (ID)
+
+We prove Proposition 3.5, where we show that if the representation is imperfect, then fine-tuning does better than linear probing, in-distribution.
+
+Restatement of Proposition 3.5. In the linear overparameterized setting, under the ID subspace assumption (Assumption 3.3), let $R_0 = \text{rowspace}(B_0)$ , and $R_{\mathrm{aug}} = \text{Span}(\{w_\star\} \cup R_0)$ . Suppose $w_\star \notin R_0$ , $\cos \theta_{\max}(S, R_{\mathrm{aug}}) \neq 0$ , and that fine-tuning converges to a local minimum of its loss. Then fine-tuning does better in-distribution: $L_{\mathrm{id}}(v_{\mathrm{ft}}^\infty, B_{\mathrm{ft}}^\infty) < L_{\mathrm{id}}(v_{\mathrm{lp}}^\infty, B_0)$ with probability 1 (over the randomness of the training examples).
+
+Proof. Fine-tuning gets 0 ID loss: It is well known from prior work (Laurent & von Brecht, 2018) that all local minima are global when optimizing two-layer linear networks under convex losses (which is our setting), so if fine-tuning converges to a local minimum, it actually converges to a global minimum of the train loss. Since there exist parameters that achieve 0 loss on the training data (namely, $B_{\star}, v_{\star}$ ), this means fine-tuning gets 0 loss on the training data as well. So for all training examples $x$ (that is, rows of $X$ ):
+
+$$
+v _ {\mathrm {f t}} ^ {\infty \top} B _ {\mathrm {f t}} ^ {\infty} x = w _ {\star} ^ {\top} x. \tag {A.198}
+$$
+
+Since the models are linear, this implies that fine-tuning gets all examples in the span of the training examples correct as well.
Since $P_{z}$ has density, and the number of training examples $n$ is at least as large as the ID subspace dimension $m$ , the training examples span the ID subspace almost surely, so fine-tuning gets every example $x \in S$ correct almost surely, giving us:
+
+$$
+L _ {\mathrm {i d}} \left(v _ {\mathrm {f t}} ^ {\infty}, B _ {\mathrm {f t}} ^ {\infty}\right) = 0 \tag {A.199}
+$$
+
+Linear probing gets positive ID loss: Lemma A.20 shows that the ID error of linear probing is greater than zero under the same assumptions as this Proposition, so
+
+$$
+L _ {\mathrm {i d}} \left(v _ {\mathrm {l p}} ^ {\infty}, B _ {0}\right) > 0, \tag {A.200}
+$$
+
+which finishes the proof.
+
+We now state and prove the Lemmas that we used to lower bound the ID error of linear probing.
+
+Lemma A.18 gives conditions for when the projection $F^{\top}w$ of a vector $w$ is not contained in the projection $\mathrm{Range}(F^{\top}E_0)$ of the column space of a matrix $E_0$ .
+
+Lemma A.18. Let $w \in \mathbb{R}^d$ be a vector and $F \in \mathbb{R}^{d \times m}$ , $E_0 \in \mathbb{R}^{d \times k}$ , $E_{\mathrm{aug}} \in \mathbb{R}^{d \times (k + 1)}$ have orthonormal columns, with $\text{Range}(E_{\mathrm{aug}}) = \text{Span}(\{w\} \cup \text{Range}(E_0))$ . If $m > k$ , we have:
+
+$$
+F ^ {\top} E _ {\mathrm {a u g}} \text { is full rank} \tag {A.201}
+$$
+
+$$
+\xrightarrow {(a)} F ^ {\top} E _ {\mathrm {a u g}} \text { has higher rank than } F ^ {\top} E _ {0} \tag {A.202}
+$$
+
+$$
+\stackrel {(b)} {\iff} F ^ {\top} w \notin \operatorname {R a n g e} \left(F ^ {\top} E _ {0}\right) \tag {A.203}
+$$
+
+Proof. The proof of (a) is clear: $F^{\top}E_{\mathrm{aug}} \in \mathbb{R}^{m \times (k + 1)}$ has rank $k + 1$ (since it is full rank and $m \geq k + 1$ ), but $F^{\top}E_{0} \in \mathbb{R}^{m \times k}$ has rank at most $k$ and is therefore lower rank.
The assumption that $m > k$ is crucial here.
+
+For (b), let $a_1, \ldots, a_k$ be the columns of $E_0$ , which form a basis for $\operatorname{Range}(E_0)$ . Then $F^\top a_1, \ldots, F^\top a_k, F^\top w$ span $\operatorname{Range}(F^\top E_{\mathrm{aug}})$ , while $F^\top a_1, \ldots, F^\top a_k$ span $\operatorname{Range}(F^\top E_0)$ . So (notice the first list of vectors has an additional $F^\top w$ ) this means that $\dim(\operatorname{Range}(F^\top E_{\mathrm{aug}})) \neq \dim(\operatorname{Range}(F^\top E_0))$ iff $F^\top w$ is linearly independent from the rest, that is, $F^\top w \notin \operatorname{Range}(F^\top E_0)$ . Note that the rank of a matrix is the dimension of its range (column space), that is, $\dim(\operatorname{Range}(A)) = \operatorname{rank}(A)$ , so this is what we wanted to show.
+
+The next Lemma says that if the projection $F^{\top}w_{\star}$ of the optimal linear model $w_{\star}$ onto the ID subspace $S$ is not contained in the projection $\mathrm{Range}(F^{\top}E_0)$ of the features, then linear probing incurs non-zero ID error.
+
+Lemma A.19. In the linear overparameterized setting, under the ID subspace assumption, if $F^{\top}w_{\star} \notin \text{Range}(F^{\top}E_0)$ , then $L_{\mathrm{id}}(v_{\mathrm{lp}}^{\infty}, B_0) > 0$ , where $E_0 \in \mathbb{R}^{d \times k}$ and $F \in \mathbb{R}^{d \times m}$ have orthonormal columns that form a basis for the feature rowspace $R_0 = \text{rowspace}(B_0)$ and ID subspace $S$ respectively.
+
+Proof. We prove the contrapositive. Suppose $L_{\mathrm{id}}(v_{\mathrm{lp}}^{\infty},B_0) = 0$ .
This means that: + +$$ +L_{\mathrm{id}}\left(v_{\mathrm{lp}}^{\infty}, B_{0}\right) = \underset{x \sim P_{\mathrm{id}}}{\mathbb{E}}\left[\left(v_{\star}^{\top} B_{\star} x - (v_{\mathrm{lp}}^{\infty})^{\top} B_{0} x\right)^{2}\right] = 0 \tag{A.204} +$$ + +Since the squared error is always non-negative, this means that $(v_{\mathrm{lp}}^{\infty})^{\top} B_0 x = w_{\star}^{\top} x$ almost surely when $x \sim P_{\mathrm{id}}$ (recall that we defined $w_{\star} = B_{\star}^{\top} v_{\star}$). Recall that $P_{\mathrm{id}}$ is defined as: first pick $z \sim P_z$ (which has density) and then output $x = Fz$. Since $P_z$ has density, this implies that we get all examples in the ID subspace $S$ correct: + +$$ +(v_{\mathrm{lp}}^{\infty})^{\top} B_{0} x = w_{\star}^{\top} x \text{ for all } x \in S. \tag{A.205} +$$ + +Since the columns of $F$ form an orthonormal basis for $S$, this gives us (since each column of $F$ is in $S$): + +$$ +(v_{\mathrm{lp}}^{\infty})^{\top} B_{0} F = w_{\star}^{\top} F. \tag{A.206} +$$ + +Note that the rows of $B_0$ also form an orthonormal basis for $R_0$, just like the columns of $E_0$. So we can choose $v$ with $v^\top E_0^\top = (v_{\mathrm{lp}}^{\infty})^{\top} B_0$. Then we have: + +$$ +v^{\top} E_{0}^{\top} F = w_{\star}^{\top} F \Leftrightarrow F^{\top} E_{0} v = F^{\top} w_{\star} \tag{A.207} +$$ + +$$ +\Leftrightarrow F^{\top} w_{\star} \in \operatorname{Range}\left(F^{\top} E_{0}\right), \tag{A.208} +$$ + +where we took the transpose of both sides in the first step. This finishes the proof of the contrapositive. + +Finally, Lemma A.20 combines Lemma A.18 and Lemma A.19 to give a more interpretable condition for the ID error of linear probing: when the ID subspace $S$ has some components along the optimal linear model $w_{\star}$ and the feature rowspace $R_0$, then linear probing has non-zero error.
This is measured in terms of the principal angle $\cos \theta_{\max}(R_{\mathrm{aug}},S)$ between the ID subspace $S$ and $R_{\mathrm{aug}}$, which is the span of $R_0$ combined with $w_{\star}$. This angle will typically be non-zero—as an illustrative example, from Lemma A.17 we have that this angle will be non-zero almost surely if the ID subspace $S$ is a uniformly random subspace. + +Lemma A.20. In the linear overparameterized setting, under the ID subspace assumption, let $R_0 = \operatorname{rowspace}(B_0)$, and $R_{\mathrm{aug}} = \operatorname{Span}(\{w_\star\} \cup R_0)$. If $w_\star \notin R_0$ and $\cos \theta_{\max}(R_{\mathrm{aug}}, S) > 0$, then $L_{\mathrm{id}}(v_{\mathrm{lp}}^{\infty}, B_0) > 0$. + +Proof. After a bit of setup, the proof simply combines Lemma A.18 and Lemma A.19. If $w_{\star} \notin R_0$, then $R_{\mathrm{aug}}$ has dimension $k + 1$. Let $E_{\mathrm{aug}} \in \mathbb{R}^{d \times (k + 1)}, F \in \mathbb{R}^{d \times m}$ have orthonormal columns which form a basis for $R_{\mathrm{aug}}$ and $S$ respectively. We assumed $\cos \theta_{\max}(R_{\mathrm{aug}}, S) = \sigma_{\min}(F^{\top}E_{\mathrm{aug}}) > 0$, which means that $F^{\top}E_{\mathrm{aug}}$ is full rank. The ID subspace assumption assumes that $m > k$. So from Lemma A.18, $F^{\top}w_{\star} \notin \operatorname{Range}(F^{\top}E_0)$, where $E_0 \in \mathbb{R}^{d \times k}$ has orthonormal columns that form a basis for $R_0$. Then from Lemma A.19, $L_{\mathrm{id}}(v_{\mathrm{lp}}^{\infty}, B_0) > 0$. + +# A.7 LP-FT + +We start by showing a simple proposition: if the initial feature extractor is perfect, then linear probing recovers the optimal weights. + +Proposition A.21. In the overparameterized linear setting, let $R = \operatorname{rowspace}(B_0)$. If $B_0 = B_\star$ and $\cos \theta_{\max}(S,R) > 0$, then $L_{\mathrm{ood}}(v_{\mathrm{lp}}^{\infty},B_0) = 0$. + +Proof.
We first show that because $\cos \theta_{\max}(R, S) > 0$, the training loss for linear probing is strongly convex. Recall that the training loss is: + +$$ +\widehat{L}(v, B) = \left\| X B^{\top} v - Y \right\|_{2}^{2} \tag{A.209} +$$ + +Linear probing keeps $B$ fixed as $B_0 = B_\star$ and only tunes $v$, so we are interested in the Hessian of the loss with respect to $v$ evaluated at $v, B_\star$: + +$$ +\operatorname{Hess}_{v} \widehat{L}(v, B_{\star}) = 2\left(B_{\star} X^{\top}\right)\left(B_{\star} X^{\top}\right)^{\top} \tag{A.210} +$$ + +For strong convexity, it suffices to show that the minimum singular value of the Hessian is bounded away from 0 by a constant. Recall the definition of $\cos \theta_{\max}(R,S)$. For some $F$ whose columns form an orthonormal basis for $S$, we have (since the rows of $B_{\star}$ form an orthonormal basis for $R$): + +$$ +\sigma_{k}\left(B_{\star} F\right) = \cos \theta_{\max}(R, S) > 0 \tag{A.211} +$$ + +Note that $B_{\star}F$ is a $k$-by-$m$ matrix, so if the $k$-th singular value is positive it must be full rank. Since the columns of $X^{\top}$ (the training examples) span the ID subspace $S$ almost surely, and the columns of $F$ form an orthonormal basis for $S$, this means $B_{\star}X^{\top}$ is rank $k$. But that means the Hessian $(B_{\star}X^{\top})(B_{\star}X^{\top})^{\top}$, a $k$-by-$k$ matrix, is rank $k$ as well, hence positive definite. So the linear probing loss is strongly convex. + +Since the loss is strongly convex, there is a unique minimizer, and gradient flow converges to it. However, since we are in the well-specified setting, we know the training loss is: + +$$ +\widehat{L}(v, B_{\star}) = \left\| X B_{\star}^{\top} v - X B_{\star}^{\top} v_{\star} \right\|_{2}^{2} \tag{A.212} +$$ + +So $v = v_{\star}$ achieves 0 loss and must be the (unique) minimizer.
Therefore we have shown that linear probing converges to the unique minimizer $v_{\mathrm{lp}}^{\infty} = v_{\star}$, which attains 0 loss, as desired. + +Note that the entire proof works out if $B_0 = UB_{\star}$ for some rotation matrix $U$. In that case, the Hessian becomes $2U(B_{\star}X^{\top})(B_{\star}X^{\top})^{\top}U^{\top}$, which is still rank $k$, since multiplying by square rotation matrices does not change the rank. In this case, the minimizer of the loss is $v = Uv_{\star}$, since $(UB_{\star})^{\top}(Uv_{\star}) = B_{\star}^{\top}v_{\star}$. So linear probing converges to $v_{\mathrm{lp}}^{\infty} = Uv_{\star}$, which achieves 0 loss, as desired. + +Restatement of Proposition 3.6. Given perfect pretrained features $B_0 = UB_\star$ for some rotation $U$. Let $R_0 = \operatorname{rowspace}(B_0)$. Under the non-degeneracy conditions $\cos\theta_{\max}(R_0, S) \neq 0, \cos\theta_{\max}(R_0, S^\perp) \neq 0$: + +$$ +\forall t,\ L_{\mathrm{ood}}\left(B_{\mathrm{ft}}(t)^{\top} v_{\mathrm{ft}}(t)\right) > 0, \text{ if } v_{0} \sim \mathcal{N}\left(0, \sigma^{2} I\right) \text{ is randomly initialized (FT)}, \tag{A.213} +$$ + +$$ +\forall t,\ L_{\mathrm{ood}}\left(B_{\mathrm{ft}}(t)^{\top} v_{\mathrm{ft}}(t)\right) = 0, \text{ if } v_{0} \text{ is initialized to } v_{\mathrm{lp}}^{\infty} \text{ (LP-FT)}. \tag{A.214} +$$ + +Proof. We first use Proposition A.21, which, as shown in its proof, still works if $B_0 = UB_\star$ for some rotation matrix $U$ (which doesn't have to be the identity). We get that $v_{\mathrm{lp}}^\infty = Uv_\star$. Then we have $B_0^\top v_{\mathrm{lp}}^\infty = B_\star^\top v_\star = w_\star$. + +We now just show that the gradients of the training loss $\widehat{L}$ at $(v_{\mathrm{lp}}^{\infty},B_0)$ are 0, so gradient flow does not update the parameters at all.
+ +The training loss is: + +$$ +\widehat{L}(v, B) = \left\| X B^{\top} v - X B_{\star}^{\top} v_{\star} \right\|_{2}^{2} \tag{A.215} +$$ + +The derivative with respect to $v$ is: + +$$ +\partial_{v} \widehat{L}(v, B) = 2 B X^{\top}\left(X B^{\top} v - X B_{\star}^{\top} v_{\star}\right) \tag{A.216} +$$ + +Then since $B_0^\top v_{\mathrm{lp}}^\infty = B_\star^\top v_\star$, we have: + +$$ +\partial_{v} \widehat{L}\left(v_{\mathrm{lp}}^{\infty}, B_{0}\right) = 0 \tag{A.217} +$$ + +Next, the derivative with respect to $B$ is: + +$$ +\partial_{B} \widehat{L}(v, B) = 2 v\left(X B^{\top} v - X B_{\star}^{\top} v_{\star}\right)^{\top} X \tag{A.218} +$$ + +Then since $B_0^\top v_{\mathrm{lp}}^\infty = B_\star^\top v_\star$, we have: + +$$ +\partial_{B} \widehat{L}\left(v_{\mathrm{lp}}^{\infty}, B_{0}\right) = 0 \tag{A.219} +$$ + +So since both derivatives are 0, we have $\partial_t v_{\mathrm{ft}}(t) = 0$ and $\partial_t B_{\mathrm{ft}}(t) = 0$, which means the parameters don't change at all—at all times $t$ we have $v_{\mathrm{ft}}(t) = U v_\star$ and $B_{\mathrm{ft}}(t) = UB_\star$, which gives us zero OOD loss: $L_{\mathrm{ood}}(B_{\mathrm{ft}}(t)^{\top}v_{\mathrm{ft}}(t)) = 0$, as desired. + +# B MORE INFORMATION ON EXPERIMENTS + +In this Appendix, we include more details on the datasets, pretraining methods, and adaptation methods. We also include the OOD accuracies for fine-tuning and linear probing if we early stop and choose the learning rate based on OOD data, where we see that linear probing is still typically better than fine-tuning OOD. Finally, we include results for additional baselines and pretraining models, and conclude with a discussion of the effective robustness of LP-FT. + +# B.1 OVERVIEW OF DATASETS + +We first give an overview of the datasets used in our paper, before diving into more details of the exact training procedures (e.g., number of epochs, pretraining method, etc).
The datasets we use are: + +- **DomainNet** (Peng et al., 2019) is a standard domain adaptation dataset. Here, our ID dataset contains "sketch" images (e.g., drawings of apples, elephants, etc), and the OOD dataset contains "real", "clipart", and "painting" images of the same categories. We use the version of the dataset from Tan et al. (2020). +- Living-17 and Entity-30 are sub-population shift datasets from the BREEDS benchmark (Santurkar et al., 2020). In Living-17 the goal is to classify an image as one of 17 animal categories such as "bear"—for example, the ID dataset contains images of black bears and sloth bears and the OOD dataset has images of brown bears and polar bears. In Entity-30 the goal is to classify an image as one of 30 entities such as "fruit" or "insect". +- FMoW Geo-shift is adapted from the satellite remote sensing dataset Functional Map of the World (Christie et al., 2018; Koh et al., 2021). The goal is to classify a satellite image into one of 62 categories such as "impoverished settlement" or "hospital". Our ID dataset contains images from North America, and the OOD dataset contains images from Africa and Europe. +- CIFAR-10 $\rightarrow$ STL is a standard domain adaptation dataset (French et al., 2018), where the ID is CIFAR-10 (Krizhevsky, 2009), and the OOD is STL (Coates et al., 2011). The task is to classify an image into one of 10 categories such as "dog", "cat", or "airplane"—as usual, we remove the "monkey" class in STL since CIFAR-10 has no "monkey" images. +- CIFAR-10 $\rightarrow$ CIFAR-10.1 (Recht et al., 2018) is a dataset collected using a very similar protocol to CIFAR-10, and the authors describe it as "a minute distributional shift". The hope is that a classifier trained on CIFAR-10 gets high accuracy on CIFAR-10.1. 
+- ImageNet-1K (Russakovsky et al., 2015) is a large scale dataset containing over a million images, where the goal is to classify an image into one of 1000 categories such as "Yorkshire terrier", "Labrador retriever", "acoustic guitar", "library", "school bus", etc. We fine-tune on ImageNet as the ID dataset, and evaluate on four standard OOD datasets: ImageNetV2 (Recht et al., 2019), ImageNet-R (Hendrycks et al., 2020), ImageNet-A (Hendrycks et al., 2019b), and ImageNet-Sketch (Wang et al., 2019). + +# B.2 DATASET AND METHOD DETAILS + +We use a diverse range of datasets and pretraining strategies. + +- CIFAR-10 $\rightarrow$ STL: We fine-tune or linear probe on CIFAR-10 (Krizhevsky, 2009) and test on STL (Coates et al., 2011). This is a benchmark used in domain adaptation papers (French et al., 2018). CIFAR-10 and STL share 9 classes, so we follow the common practice of omitting the unshared class in STL (which is the 'monkey' class) when reporting accuracies. We use a publicly available MoCo-v2 ResNet-50 checkpoint pretrained on unlabeled examples from ImageNet-1k (Russakovsky et al., 2015), and fine-tune for 20 epochs. + +Table 3: OOD accuracies with $90\%$ confidence intervals over 3 runs, for each of the three OOD domains in the split of DomainNet used by Tan et al. (2020); Prabhu et al. (2021). LP does better than FT across the board, and LP-FT does the best. + +
| | Real | Painting | Clipart |
| --- | --- | --- | --- |
| Fine-tuning | 55.29 (0.52) | 50.26 (0.98) | 60.93 (2.15) |
| Linear probing | 87.16 (0.18) | 74.50 (0.58) | 77.29 (0.12) |
| LP-FT | 86.82 (0.51) | 75.91 (0.73) | 79.48 (0.90) |
+ +- **DomainNet:** We use the dataset splits in Tan et al. (2020), which are also used by follow-up work, e.g., in Prabhu et al. (2021). This is different from the original version of the DomainNet dataset (Peng et al., 2019); specifically, Tan et al. (2020) note that some domains and classes contain many mislabeled outliers, so they select the 40 most common classes from the 'sketch', 'real', 'clipart' and 'painting' domains. We use the 'sketch' domain as ID, and all other domains ('real', 'clipart', 'painting') as OOD, and in the main paper we report the average accuracies across the OOD domains. In Table 3 we see that the same trends hold for each of the three OOD domains. We use a CLIP (Radford et al., 2021) pretrained ResNet-50 model, and fine-tune for 50 epochs (since this is a smaller dataset). +- Living-17 and Entity-30: We use a publicly available MoCo-v2 ResNet-50 checkpoint pretrained on unlabeled examples from ImageNet-1k (Russakovsky et al., 2015), and fine-tune for 20 epochs. Note that Living-17 and Entity-30 are subpopulation shifts derived from ImageNet, but the pretraining is done on unlabeled data and does not see any OOD labels, following the pretraining and fine-tuning strategy in Cai et al. (2021). Entity-30 is a relatively large dataset that contains around 140K training examples. +- FMoW Geo-shift: We adapt the version of the dataset from Koh et al. (2021). We use training data from 'North America' to fine-tune or linear probe, and then evaluate on validation data from Africa and Europe. We use a MoCo-TP (Ayush et al., 2020) checkpoint, pretrained on unlabeled FMoW satellite images. We fine-tune for 50 epochs here since the ID training dataset is smaller (around 20K examples). +- CIFAR-10 $\rightarrow$ CIFAR-10.1 (Recht et al., 2018): We follow the same protocols as CIFAR-10 $\rightarrow$ STL, except we test on CIFAR-10.1.
+- ImageNet: We linear probe or fine-tune on ImageNet (Russakovsky et al., 2015), and evaluate on ImageNetV2 (Recht et al., 2019), ImageNet-R (Hendrycks et al., 2020), ImageNet-A (Hendrycks et al., 2019b), and ImageNet-Sketch (Wang et al., 2019). We use a CLIP pretrained ViT-B/16 (vision transformer), the largest publicly available CLIP model (Radford et al., 2021). We ran fine-tuning for 10 epochs and linear probing for 10 epochs. To equalize the runtime for LP-FT, we ran the linear probing stage for 5 epochs, and then the fine-tuning stage for 5 epochs. We used a batch size of 128 for all methods. + +Tuning for ImageNet experiments. We swept over three learning rates for fine-tuning (0.0001, 0.0003, 0.001) and linear probing (0.01, 0.03, 0.1)—as is standard, we use larger learning rates for linear probing. For LP-FT, we swept over 3 learning rates (0.01, 0.03, 0.1) for the 5-epoch linear probing step. We took the run that had the best ImageNet (ID) validation accuracy, and then swept over 3 learning rates (0.00001, 0.00003, 0.0001) for the 5-epoch fine-tuning step—we use a lower learning rate for LP-FT since the experiments on the other datasets suggested that the optimal learning rate that maximizes ID validation accuracy for LP-FT is smaller. We did not find the comparisons to be particularly sensitive to the learning rate choice. + +Augmentations for ImageNet experiments. We used augmentations for fine-tuning, and no augmentations for linear probing, following Kornblith et al. (2019). This might raise the question of whether linear probing and LP-FT do better OOD because of the lack of augmentations. So as an ablation we also tried fine-tuning without augmentations; however, that led to worse accuracy (than fine-tuning with augmentations) both ID and OOD. We now give details on the preprocessing and augmentations that we used.
On ImageNet, for linear probing and LP-FT, we used no augmentations—we just resized each image so that the smaller side has size 224 with bicubic interpolation, and then center-crop to a 224-by-224 image. For fine-tuning, we used augmentations: + +Table 4: OOD accuracies with $90\%$ confidence intervals over 3 runs, when fine-tuning gets to choose learning rate and early stop, and linear probing gets to choose $\ell_2$ regularization weights, on OOD data. We see that linear probing still typically does better OOD (the only flip from before is on FMoW). + +
| | CIFAR-10.1 | STL | Ent-30 | Liv-17 | DomNet | FMoW |
| --- | --- | --- | --- | --- | --- | --- |
| FT | 92.27 (0.36) | 85.97 (0.38) | 64.09 (0.19) | 78.63 (0.53) | 59.43 (2.49) | 40.23 (3.12) |
| LP | 82.67 (0.22) | 86.53 (0.01) | 69.15 (0.13) | 82.39 (0.14) | 79.91 (0.24) | 37.12 (0.01) |

| | ImNetV2 | ImNet-R | ImNet-Sk | ImNet-A | Average |
| --- | --- | --- | --- | --- | --- |
| FT | 71.5 (-) | 52.4 (-) | 40.5 (-) | 27.8 (-) | 61.3 |
| LP | 69.7 (-) | 70.9 (-) | 46.4 (-) | 46.1 (-) | 67.1 |
+ +Table 5: In-distribution (ID): Average distance that features move before and after fine-tuning or LP-FT, multiplied by 100 to make things easier to read. For linear probing the numbers are all 0, since the features are not tuned. As predicted by our theory, we see that features for ID examples (this table) move more than features for OOD examples (Table 6). Both sets of features change substantially less for LP-FT. As usual we show $90\%$ confidence intervals over three runs. + +
| | CIFAR-10 | Entity-30 | Living-17 | DomainNet | FMoW |
| --- | --- | --- | --- | --- | --- |
| FT | 2.23 (0.03) | 3.05 (0.02) | 1.88 (0.01) | 207.6 (12.31) | 4.87 (0.15) |
| LP-FT | 0.07 (0.00) | 0.03 (0.01) | 0.11 (0.01) | 0.19 (0.03) | 0.57 (0.19) |
+ +specifically we use RandomResizedCrop in TorchVision, with the default arguments and setting the size of the crop to 224, and then apply a random horizontal flip. + +Notes on pretrained model choice. We note that our results say that the pretraining has to be good (e.g., at least get reasonable accuracy ID) for linear probing to outperform fine-tuning OOD. So, for example, we use a model pretrained on unlabeled satellite images for the satellite image dataset—if we pretrain the model on ImageNet, we expect that fine-tuning might do better. Similarly, for DomainNet we use a CLIP pretrained model, which is pretrained on the very large WebImageText dataset, and sees a variety of photo and sketch like images. Pretraining on ImageNet alone does not lead to high accuracies on DomainNet (features are not very good), so we do not necessarily expect linear probing to outperform fine-tuning with these lower quality features (for example, see the MoCo ablation in our main paper where we used a worse pretrained model, and fine-tuning did better OOD). + +Sanity check of fine-tuning implementation. As a sanity check of our implementation, fine-tuning did substantially better than training from scratch on all datasets (both ID and OOD) and matched existing fine-tuning numbers where available (e.g. ResNet50 on CIFAR-10 (Chen et al., 2020b) and Entity-30 (Cai et al., 2021)). Fine-tuning and linear probing also both do substantially better than training from scratch, ID and OOD, across the datasets. For example, on Living-17, training from scratch gets $89.3\%$ ID and $58.2\%$ OOD (Santurkar et al., 2020) which is over $5\%$ worse ID and nearly $20\%$ worse OOD, than all the adaptation methods. For reference linear probing gets $96.5\%$ ID and $82.2\%$ OOD, and fine-tuning gets $97.1\%$ ID and $77.8\%$ OOD. This is even though training from scratch was run for 300 epochs, which is 15 times longer than fine-tuning and LP-FT. 
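For concreteness, the two preprocessing pipelines described above (deterministic resize and center crop for linear probing; random resized crop plus random horizontal flip for fine-tuning) can be sketched in plain NumPy. This is a simplified stand-in for the TorchVision transforms actually used: it resizes with nearest-neighbor interpolation rather than bicubic for brevity, the crop parameters mirror TorchVision's defaults, and the `rng` argument is only there to make the sketch reproducible.

```python
import numpy as np

def resize_shorter_side(img, size):
    # Resize so the shorter side equals `size` (nearest-neighbor here;
    # the experiments use bicubic interpolation).
    h, w = img.shape[:2]
    scale = size / min(h, w)
    nh, nw = round(h * scale), round(w * scale)
    rows = (np.arange(nh) * h / nh).astype(int)
    cols = (np.arange(nw) * w / nw).astype(int)
    return img[rows][:, cols]

def center_crop(img, size):
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def eval_preprocess(img, size=224):
    # Linear probing / LP-FT evaluation: deterministic resize + center crop.
    return center_crop(resize_shorter_side(img, size), size)

def train_preprocess(img, size=224, rng=np.random.default_rng(0)):
    # Fine-tuning: a simplified RandomResizedCrop (default scale 0.08-1.0,
    # aspect ratio 3/4-4/3) followed by a random horizontal flip.
    h, w = img.shape[:2]
    for _ in range(10):
        target_area = rng.uniform(0.08, 1.0) * h * w
        ratio = np.exp(rng.uniform(np.log(3 / 4), np.log(4 / 3)))
        ch = round(np.sqrt(target_area / ratio))
        cw = round(np.sqrt(target_area * ratio))
        if 0 < ch <= h and 0 < cw <= w:
            top = rng.integers(0, h - ch + 1)
            left = rng.integers(0, w - cw + 1)
            crop = img[top:top + ch, left:left + cw]
            break
    else:  # fallback if no valid crop was sampled
        crop = center_crop(img, min(h, w))
    crop = resize_shorter_side(crop, size)[:size, :size]
    if rng.random() < 0.5:
        crop = crop[:, ::-1]  # horizontal flip
    return crop
```

Both pipelines return a 224-by-224 image; only the fine-tuning one is stochastic, which matches the no-augmentation protocol used for linear probing above.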
+ +# B.3 TARGET EARLY STOPPING + +In the main paper, one ablation we mention is early stopping each fine-tuning method and choosing the best learning rate based on target validation accuracy. As expected, fine-tuning does improve a little, but linear probing (average accuracy: $67.1\%$) is still better than fine-tuning (average accuracy: $61.3\%$). Table 4 shows the full results for all datasets. + +# B.4 FEATURE CHANGE + +We examine how much the features changed for ID and OOD examples in each dataset. Specifically, for each dataset, for each input example in the held-out validation set, we computed the Euclidean distance of the ResNet-50 features before and after fine-tuning. We averaged these numbers across the dataset, showing the results for ID validation examples in Table 5, and for OOD examples in Table 6. + +Table 6: Out-of-distribution (OOD): Average distance that features move before and after fine-tuning or LP-FT, multiplied by 100 to make things easier to read. For linear probing the numbers are all 0, since the features are not tuned. As predicted by our theory, we see that features for ID examples (Table 5) move more than features for OOD examples (this table). Both sets of features change substantially less for LP-FT. As usual we show $90\%$ confidence intervals over three runs.
| | STL | Entity-30 | Living-17 | DomainNet | FMoW |
| --- | --- | --- | --- | --- | --- |
| FT | 1.70 (0.04) | 2.60 (0.02) | 1.67 (0.01) | 159.97 (16.23) | 5.62 (0.30) |
| LP-FT | 0.04 (0.00) | 0.02 (0.00) | 0.09 (0.01) | 0.18 (0.02) | 0.54 (0.17) |
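The feature-movement numbers reported in Tables 5 and 6 come from a short computation; a minimal sketch, assuming the pre- and post-fine-tuning feature matrices (one row per held-out validation example) have already been extracted:

```python
import numpy as np

def mean_feature_shift(feats_before, feats_after):
    # Average Euclidean distance between features before and after tuning,
    # multiplied by 100 as in Tables 5 and 6.
    # Inputs: arrays of shape (num_examples, feature_dim).
    dists = np.linalg.norm(feats_after - feats_before, axis=1)
    return 100 * dists.mean()
```

Linear probing leaves the feature extractor untouched, so `mean_feature_shift(F, F)` is exactly 0, matching the zeros noted in the table captions.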
+ +Table 7: ID and OOD accuracies on Living-17 using a CLIP ResNet-50 model pretrained on the WebImageText dataset, instead of unlabeled ImageNet examples. Similar findings hold—here fine-tuning does similarly to linear probing ID, but does worse than linear probing OOD. LP-FT does better than both ID, and closes $86\%$ of the gap OOD. As usual we show $90\%$ confidence intervals over three runs. + +
| | ID | OOD |
| --- | --- | --- |
| LP | 94.7 (0.2) | 78.6 (0.5) |
| FT | 94.7 (0.1) | 67.3 (0.8) |
| LP-FT | 95.6 (0.2) | 77.0 (0.6) |
+ +The feature distortion theory predicts that the features for ID examples change more than those for OOD examples. This bears out in 9 out of 10 cases, that is, in all cases except FT on FMoW. To see this, compare each cell in Table 5 with the corresponding cell in Table 6—the former is higher in 9 out of 10 cases. + +The feature distortion theory says that this large feature change is caused by the head being randomly initialized—since the head needs to be updated by a large amount, the feature extractor is also updated a lot, because the updates are coupled. Our theory predicts that if the head is initialized via linear probing then the feature extractor should change a lot less for both ID and OOD examples. As predicted by the theory, across all the datasets in Table 5 and Table 6, the features change a lot less for LP-FT than for FT. For example, on CIFAR-10, the features change $30 \times$ less for LP-FT than for FT. + +These results suggest that fine-tuning underperforms OOD, and LP-FT does well ID and OOD, for the reasons predicted by the feature distortion theory. + +# B.5 ADDITIONAL ARCHITECTURES, FINE-TUNING METHODS + +The main contributions of our paper are conceptual understanding and theory. However, to strengthen the empirical investigation we ran two additional models (a CLIP vision transformer and a CLIP ResNet-50), as well as three additional fine-tuning heuristics. We focus on the Living-17 dataset because some of these ablations require a lot of compute and can take a long time to run on all the datasets. + +Architectures and pretraining source: In the main paper, we showed results when initializing with a MoCo-v2 ResNet-50 model pretrained on unlabeled ImageNet examples. Here we examine how the results change when we 1. Use a ResNet-50 model pretrained on CLIP's WebImageText dataset (Table 7), and 2.
Use a much larger vision transformer model (ViT-B/16) pretrained on CLIP's WebImageText dataset (Table 8)—this is the largest publicly available CLIP model at the time of writing. We see that similar findings to our main paper hold—fine-tuning does better than linear probing ID, but does worse than linear probing ('underperforms') OOD. Finally, LP-FT does better than both methods ID, and closes most ($75\%$–$90\%$) of the gap OOD. + +These results are from early stopping on ID validation data. If we early stop on OOD validation data, LP-FT achieves $87.9 \pm 0.4\%$ OOD accuracy and LP gets $88.3 \pm 0.2\%$ OOD accuracy; here there is no statistically significant difference between the two. On the other hand, even if we early stop on OOD validation data, fine-tuning gets $84.4 \pm 0.5\%$ OOD accuracy, which is lower. + +Fine-tuning heuristics: Transfer learning (initializing with a pretrained model, and then adapting it to a downstream task) is the standard way to build modern ML models, because it improves accuracy + +Table 8: ID and OOD accuracies on Living-17 using a CLIP ViT-B/16 (Vision Transformer) model pretrained on the WebImageText dataset, instead of unlabeled ImageNet examples. This is the largest publicly available CLIP model that we could find. The same findings hold—fine-tuning does better than linear probing ID, but does worse than linear probing OOD. LP-FT does better than both ID, and closes $75\%$ of the gap OOD. As usual we show $90\%$ confidence intervals over three runs.
| | ID | OOD |
| --- | --- | --- |
| LP | 97.5 (0.1) | 87.6 (0.5) |
| FT | 97.8 (0.0) | 81.5 (2.1) |
| LP-FT | 98.0 (0.0) | 86.1 (0.1) |
+ +and speeds up training. Since this paradigm is so widely used, there are many heuristics people use when training their models (as mentioned in the main paper, LP-FT has sometimes been used as a heuristic as well, although not in the context of OOD). We showed that LP-FT is one way to do well ID and OOD, but we hope that our theory leads to even better fine-tuning algorithms. + +In this section, we compare LP-FT with additional fine-tuning heuristics: using a larger learning rate for the head layer, regularizing the features towards their original values, and side-tuning (Zhang et al., 2020), where we freeze the features but add a side network. + +The intuitions from our theory suggest two other potential ways to improve OOD accuracy: 1. We could use a higher learning rate on the linear layer, so that the linear layer learns more quickly and the features do not get as distorted, and 2. We could regularize the weights of the feature extractor towards the pretrained initialization, to prevent feature distortion. These heuristics have been used in prior work on fine-tuning as well; for example, method 2 corresponds to L2-SP in Li et al. (2018). + +We run these two approaches on Living-17. For approach (1), we use a $10 \times$ higher learning rate for the linear layer, and for approach (2) we regularize the Euclidean distance between the current feature extractor weights (so ignoring the linear head) and the pretrained weights, multiplying by a hyperparameter $\lambda$. We grid search over the same learning rates as fine-tuning for both methods, and in addition for (2) we grid search over $\lambda \in \{1.0, 0.1, 0.01, 0.001, 0.0001\}$, so this amounts to sweeping over 30 hyperparameters as opposed to just 6 for fine-tuning and LP-FT. For each hyperparameter configuration we run 3 replication runs with different seeds to reduce the estimation variance, and early stop and model select using ID data, just like for fine-tuning and LP-FT.
Just like for fine-tuning and LP-FT, we use a cosine learning rate decay and train for the same number of epochs. Indeed, we find that both (1) and (2) are able to close part of the OOD gap between fine-tuning and linear probing. However, LP-FT does better than both methods ID and OOD. The full results are in Table 9. + +We also compare with another method, (3) side-tuning (Zhang et al., 2020). Side-tuning freezes the pretrained features $g(x)$ but trains another 'side' model $s(x)$, and then outputs $v^{\top}(g(x) + s(x))$, where the head $v$ and the parameters of the side model $s$ are tuned. The intuition for trying this is that side-tuning also preserves the pretrained features, which likely reduces feature distortion. In the supplementary material of Zhang et al. (2020) they use a ResNet-50 for both the original model and the side model in their vision experiments, so we do the same. We sweep over twelve learning rates $(3 \cdot 10^{-5}, 1 \cdot 10^{-4}, 3 \cdot 10^{-4}, \dots, 1.0, 3.0, 10.0)$, with three replication runs with different seeds for each learning rate. Just like for fine-tuning and LP-FT, we use a cosine learning rate decay and train for the same number of epochs, and we early stop and model select using ID validation data. We checked that the best learning rate was not at the boundary of the grid search. On OOD, side-tuning ($81.0\%$) improves over fine-tuning ($77.7\%$). However, side-tuning doesn't do as well ID. LP-FT did better ID and OOD. This could be because side-tuning does not get to refine the pretrained features for the ID task—while the side network is powerful enough to learn good features, it is initialized randomly and effectively trained from scratch, so it might not be able to learn these good features on the limited-size training dataset (around 40K examples). The results are also in Table 9. + +We also include results for training from scratch in Table 9—these results are from Santurkar et al. (2020).
Note that training from scratch was done for 450 epochs, whereas fine-tuning was done for 20 epochs. As a sanity check, all the fine-tuning methods and linear probing do substantially better than training from scratch, both ID and OOD. + +Table 9: ID and OOD accuracies on Living-17 including three additional fine-tuning heuristics, where we (1) use a $10 \times$ larger learning rate for the head, (2) regularize the Euclidean distance of the feature extractor weights to the pretrained initialization, or (3) use side-tuning, where we freeze the pretrained model but add a side network that is fine-tuned. As a sanity check, all methods do better than training from scratch ID and OOD, and we show $90\%$ confidence intervals over three runs. As per the intuitions from the feature distortion theory, these methods do mitigate feature distortion to some extent and improve OOD accuracy over fine-tuning. LP-FT does better than all methods ID and OOD—nonetheless, we believe that LP-FT is just the first step and hope that our theory can be used to inspire or derive better algorithms.
| | ID | OOD |
| --- | --- | --- |
| Scratch | 92.4 (1.3) | 58.2 (2.4) |
| LP | 96.5 (0.1) | 82.2 (0.2) |
| FT | 97.1 (0.1) | 77.7 (0.7) |
| FT (10x Linear) | 97.2 (0.2) | 80.4 (0.3) |
| FT (regularized) | 97.1 (0.2) | 80.0 (0.4) |
| Side-tuning | 95.5 (0.4) | 81.0 (0.7) |
| LP-FT | 97.8 (0.1) | 82.6 (0.3) |
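In the paper's linear setting, the three heuristics above can be written down explicitly. The sketch below is an illustrative NumPy toy (not the actual ResNet training code): gradient descent on $\|XB^\top v - y\|_2^2$ with a 10x larger step size on the head $v$ (heuristic 1), an added penalty $\lambda\|B - B_0\|_F^2$ toward the pretrained initialization, i.e. L2-SP (heuristic 2), and a side-tuning variant that freezes $B_0$ and trains a side model $S$ with predictions $v^\top(B_0 + S)x$ (heuristic 3). All sizes, learning rates, and step counts are arbitrary toy choices.

```python
import numpy as np

def finetune(X, y, B0, v0, lr=1e-4, head_lr_mult=1.0, lam=0.0, steps=5000):
    # Heuristics (1) and (2): gradient descent on
    #   ||X B^T v - y||^2 + lam * ||B - B0||_F^2,
    # taking head_lr_mult-times larger steps on the head v.
    # head_lr_mult=1, lam=0 recovers plain fine-tuning.
    B, v = B0.copy(), v0.copy()
    for _ in range(steps):
        r = X @ B.T @ v - y                           # residuals, shape (n,)
        grad_v = 2 * B @ X.T @ r
        grad_B = 2 * np.outer(v, X.T @ r) + 2 * lam * (B - B0)
        v = v - head_lr_mult * lr * grad_v
        B = B - lr * grad_B
    return v, B

def side_tune(X, y, B0, v0, lr=1e-4, steps=5000):
    # Heuristic (3), side-tuning: B0 is frozen; only the head v and a
    # zero-initialized side model S are trained. Predictions use B0 + S.
    S, v = np.zeros_like(B0), v0.copy()
    for _ in range(steps):
        r = X @ (B0 + S).T @ v - y
        grad_v = 2 * (B0 + S) @ X.T @ r
        grad_S = 2 * np.outer(v, X.T @ r)             # B0 receives no gradient
        v = v - lr * grad_v
        S = S - lr * grad_S
    return v, S

# Tiny synthetic instance of the linear setting (all quantities made up).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))                  # n=20 examples, d=5 dims
B_star = rng.normal(size=(2, 5))              # k=2 "true" features
v_star = rng.normal(size=(2,))
y = X @ B_star.T @ v_star                     # well-specified labels
B0 = B_star + 0.1 * rng.normal(size=(2, 5))   # imperfect pretrained features
v0 = 0.01 * rng.normal(size=(2,))             # near-zero random head init
v_ft, B_ft = finetune(X, y, B0, v0, head_lr_mult=10.0, lam=0.1)
```

With `lam > 0` the feature extractor stays close to `B0` by construction, and side-tuning never moves it at all, mirroring the reduced feature distortion these heuristics aim for.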
+ +# B.6 DISCUSSION OF EFFECTIVE ROBUSTNESS + +LP-FT gets higher OOD accuracy than fine-tuning, but it sometimes gets higher ID accuracy as well. Taori et al. (2020) and Miller et al. (2021) show that OOD accuracy can often be correlated with ID accuracy, and suggest examining the effective robustness: intuitively, the extra gain in OOD accuracy beyond what can be predicted from improved ID accuracy alone. Is LP-FT simply better in-distribution, or does it have higher effective robustness as well? + +We start by noting that linear probing clearly has higher effective robustness in most of our datasets. Linear probing does worse than fine-tuning ID, so based on the effective robustness framework we would expect it to do worse than fine-tuning OOD as well. However, linear probing does better than fine-tuning OOD and therefore has higher effective robustness. + +![](images/67f8623447ea31f6800e9a11089d9bcb2892a07edf503ce19d377b43640d9f3f.jpg) +Figure 3: We plot the OOD accuracy against ID accuracy on Living-17 for the three methods we consider, when we start from three different pretrained models (CLIP ResNet-50, CLIP ViT-B/16, MoCoV2 ResNet-50). The lines for linear probing and LP-FT lie above fine-tuning, which suggests that they have higher effective robustness. Each point is produced by averaging over three random seeds. + +The solutions found by LP-FT also appear to have higher effective robustness than fine-tuning, because when they have similar ID accuracy, LP-FT does much better OOD. Here are a few pieces of evidence: + +1. On CIFAR-10 $\rightarrow$ STL, there is no statistically significant difference between FT and LP-FT on ID, but LP-FT gets $8\%$ higher accuracy OOD in Table 2. +2. If we look at checkpoints earlier in training for CIFAR-10 $\rightarrow$ STL we can exactly equalize ID accuracy and compare OOD accuracies. In-distribution, LP-FT and FT both get $97.2\%$ accuracy, but OOD, LP-FT ($90.2\%$) is much better than FT ($81.8\%$). + +3.
Finally, in Figure 3 we plot the OOD accuracy against the ID accuracy for fine-tuning and LP-FT on Living-17. We plot these for three different pretrained models (CLIP ResNet-50, CLIP ViT-B/16, MoCo-V2 ResNet-50). We see that the ID-OOD line for LP-FT is above the line for FT, indicating higher effective robustness. + +Note that higher effective robustness does not mean a method is better. For example, a method A can have higher effective robustness than B by doing a lot worse in-distribution even when they have the same OOD accuracy. In this case, A is clearly inferior since it does worse ID and the same OOD, but it has higher effective robustness because of its worse ID accuracy. + +We believe the finding that linear probing and LP-FT have higher effective robustness than fine-tuning when the distribution shift is large is particularly interesting because Taori et al. (2020) and Miller et al. (2021) show that it is uncommon for methods to have higher effective robustness. In our case linear probing and LP-FT appear to consistently have higher effective robustness, which suggests that with good transfer learning methods we can get both high in-distribution accuracy and higher effective robustness. + +# C ADDITIONAL RELATED WORK + +Theoretical analysis of overparameterized models. Modern deep learning presents an interesting paradigm for theoretical analysis where the number of parameters is much larger than the number of training points. The model class is highly expressive and several solutions obtain zero training loss even in the presence of noise. Such overparameterized models have received a lot of interest recently, especially with a focus on understanding "benign overfitting", i.e., the phenomenon where fitting noisy training data to zero loss leads to classifiers that generalize well. By analyzing different linear overparameterized settings, Belkin et al. (2019); Hastie et al. (2019); Bartlett et al. (2019); Muthukumar et al. (2020); Mei & Montanari (2019); Bibas et al.
(2019) study various statistical properties such as the "double descent curve" in addition to benign overfitting. One important aspect of overparameterized models is that there is no unique minimizer of the training loss. We need some inductive bias which is typically implicit via the optimization procedure. Prior works study the statistical properties of the explicit inductive bias of minimum norm interpolation. In contrast, we study the effect of gradient based optimization from a particular pretrained initialization where we effectively capture the exact implicit inductive bias of gradient based fine tuning. \ No newline at end of file diff --git a/finetuningcandistortpretrainedfeaturesandunderperformoutofdistribution/images.zip b/finetuningcandistortpretrainedfeaturesandunderperformoutofdistribution/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4ffa9041c554af0a8802f6085aaca083820f7a92 --- /dev/null +++ b/finetuningcandistortpretrainedfeaturesandunderperformoutofdistribution/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9921b2f85a13854e82dd394b5939619d1ea8be2e5d3c3d404db0204875e54e8d +size 1741360 diff --git a/finetuningcandistortpretrainedfeaturesandunderperformoutofdistribution/layout.json b/finetuningcandistortpretrainedfeaturesandunderperformoutofdistribution/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..688989e46a16a6ea3564ec7ed8c7c15cda95fc85 --- /dev/null +++ b/finetuningcandistortpretrainedfeaturesandunderperformoutofdistribution/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c446767e4855a01d60d313a5cd93f86eb4a11729d3e4ee4492d7771f37a265f9 +size 2155670 diff --git a/frameaveragingforinvariantandequivariantnetworkdesign/a6648ec8-fdec-4cd5-aa35-16aca705ff3b_content_list.json b/frameaveragingforinvariantandequivariantnetworkdesign/a6648ec8-fdec-4cd5-aa35-16aca705ff3b_content_list.json new file mode 100644 index 
0000000000000000000000000000000000000000..6c8db44d05fdd3fd611228e4353757509803b0ab --- /dev/null +++ b/frameaveragingforinvariantandequivariantnetworkdesign/a6648ec8-fdec-4cd5-aa35-16aca705ff3b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7541af465bd67c753fb1d0ec71d416738f0b520e040ec7d2f66ceadb8651707c +size 146577 diff --git a/frameaveragingforinvariantandequivariantnetworkdesign/a6648ec8-fdec-4cd5-aa35-16aca705ff3b_model.json b/frameaveragingforinvariantandequivariantnetworkdesign/a6648ec8-fdec-4cd5-aa35-16aca705ff3b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..724a0a646ad06b15c332b2697d34c65ae5805d22 --- /dev/null +++ b/frameaveragingforinvariantandequivariantnetworkdesign/a6648ec8-fdec-4cd5-aa35-16aca705ff3b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:84e4749bf5b30834f375849fbf22995731db06e63036e3fb47805048bbef5cf1 +size 174845 diff --git a/frameaveragingforinvariantandequivariantnetworkdesign/a6648ec8-fdec-4cd5-aa35-16aca705ff3b_origin.pdf b/frameaveragingforinvariantandequivariantnetworkdesign/a6648ec8-fdec-4cd5-aa35-16aca705ff3b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..289fe42c3aaa1046ac0097292a14dbc35a221d84 --- /dev/null +++ b/frameaveragingforinvariantandequivariantnetworkdesign/a6648ec8-fdec-4cd5-aa35-16aca705ff3b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:af3e3f7f0cb560a91cb2502f84536b6cbf6f217dbad10750851b0f26d91e914a +size 1311926 diff --git a/frameaveragingforinvariantandequivariantnetworkdesign/full.md b/frameaveragingforinvariantandequivariantnetworkdesign/full.md new file mode 100644 index 0000000000000000000000000000000000000000..da73023322698c2b7ced5a028cfb291ad20aee03 --- /dev/null +++ b/frameaveragingforinvariantandequivariantnetworkdesign/full.md @@ -0,0 +1,588 @@ +# FRAME AVERAGING FOR INVARIANT AND EQUIVARIANT NETWORK DESIGN + +Omri Puny\*1 Matan 
Atzmon\*1 Heli Ben-Hamu\*1 Ishan Misra2 + +Aditya Grover2 Edward J. Smith2 Yaron Lipman2,1 + +1Weizmann Institute of Science 2Facebook AI Research + +# ABSTRACT + +Many machine learning tasks involve learning functions that are known to be invariant or equivariant to certain symmetries of the input data. However, it is often challenging to design neural network architectures that respect these symmetries while being expressive and computationally efficient; designing Euclidean motion invariant/equivariant graph or point cloud neural networks, for example, remains difficult. + +We introduce Frame Averaging (FA), a general purpose and systematic framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types. Our framework builds on the well known group averaging operator that guarantees invariance or equivariance but is intractable. In contrast, we observe that for many important classes of symmetries, this operator can be replaced with an averaging operator over a small subset of the group elements, called a frame. We show that averaging over a frame guarantees exact invariance or equivariance while often being much simpler to compute than averaging over the entire group. Furthermore, we prove that FA-based models have maximal expressive power in a broad setting and in general preserve the expressive power of their backbone architectures. Using frame averaging, we propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs. We demonstrate the practical effectiveness of FA on several applications including point cloud normal estimation, beyond 2-WL graph separation, and $n$ -body dynamics prediction, achieving state-of-the-art results in all of these benchmarks.
+ +# 1 INTRODUCTION + +Many tasks in machine learning (ML) require learning functions that are invariant or equivariant with respect to symmetric transformations of the data. For example, graph classification is invariant to a permutation of its nodes, while node prediction tasks are equivariant to node permutations. Consequently, it is important to design expressive neural network architectures that are by construction invariant or equivariant for scalable and efficient learning. This recipe has proven to be successful for many ML tasks including image classification and segmentation (LeCun et al., 1998; Long et al., 2015), set and point-cloud learning (Zaheer et al., 2017; Qi et al., 2017a), and graph learning (Kipf & Welling, 2016; Gilmer et al., 2017; Battaglia et al., 2018). + +Nevertheless, for some important instances of symmetries, the design of invariant and/or equivariant networks is either elusive (Thomas et al., 2018; Dym & Maron, 2020), computationally expensive, or lacking in expressivity (Xu et al., 2018a; Morris et al., 2019; Maron et al., 2019; Murphy et al., 2019). In this paper, we propose a new general-purpose framework, called Frame Averaging (FA), that can systematically facilitate expressive invariant and equivariant networks with respect to a broad class of groups. At the heart of our framework, we build on a basic fact that arbitrary functions $\phi : V \to \mathbb{R}$ , $\Phi : V \to W$ , where $V, W$ are some vector spaces, can be made invariant or equivariant by symmetrization, that is, averaging over the group (Yarotsky, 2021; Murphy et al., 2018), i.e., + +$$ +\psi(X) = \frac{1}{|G|} \sum_{g \in G} \phi(g^{-1} \cdot X) \quad \text{or} \quad \Psi(X) = \frac{1}{|G|} \sum_{g \in G} g \cdot \Phi(g^{-1} \cdot X), \tag{1} +$$ + +where $G = \{g\}$ denotes the group, $\psi : V \to \mathbb{R}$ is invariant and $\Psi : V \to W$ is equivariant with respect to $G$ .
Furthermore, since invariant and equivariant functions are fixed under group averaging, i.e., $\psi = \phi$ for invariant $\phi$ and $\Psi = \Phi$ for equivariant $\Phi$ , the above scheme often leads to universal (i.e., maximally expressive) models (Yarotsky, 2021). However, the challenge with equation 1 is that when the cardinality of $G$ is large (e.g., combinatorial groups such as permutations) or infinite (e.g., continuous groups such as rotations), then exact averaging is intractable. In such cases, we are forced to approximate the sum via heuristics or Monte Carlo (MC), thereby sacrificing the exact invariance/equivariance property for computational efficiency; e.g., Murphy et al. (2018; 2019) define heuristic averaging strategies for approximate permutation invariance in GNNs; similarly, Hu et al. (2021) and Shuaibi et al. (2021) use MC averaging for approximate rotation equivariance in GNNs. A concurrent approach is to find cases where computing the symmetrization operator can be done more efficiently (Sannai et al., 2021). + +The key observation of the current paper is that the group average in equation 1 can be replaced with an average over a carefully selected subset $\mathcal{F}(X) \subset G$ while retaining both exact invariance/equivariance and expressive power. Therefore, if $\mathcal{F}$ can be chosen so that the cardinality $|\mathcal{F}(X)|$ is mostly small, averaging over $\mathcal{F}(X)$ results in an expressive and efficient invariant/equivariant model. We call the set-valued function $\mathcal{F}: V \to 2^G$ a frame, and show that it can successfully replace full group averaging if it satisfies a set equivariance property. We name this framework Frame Averaging (FA) and it serves as the basis for the design of invariant/equivariant networks in this paper.
+ +We instantiate the FA framework by considering different choices of symmetry groups $G$ , their actions on data spaces $V, W$ (manifested by choices of group representations), and the backbone architectures (or part thereof) $\phi, \Phi$ we want to make invariant/equivariant to $G$ . We consider: (i) Multi-Layer Perceptrons (MLPs) and Graph Neural Networks (GNNs) with node identification (Murphy et al., 2019; Loukas, 2020), adapted to be permutation invariant; (ii) Message-Passing GNNs (Gilmer et al., 2017), adapted to be invariant/equivariant to Euclidean motions, $E(d)$ ; (iii) the set networks DeepSets and PointNet (Zaheer et al., 2017; Qi et al., 2017a), adapted to be equivariant or locally equivariant to $E(d)$ ; (iv) the point cloud network DGCNN (Wang et al., 2018), adapted to be equivariant to $E(d)$ . + +Theoretically, we prove that the FA framework maintains the expressive power of its original backbone architecture, which leads to some interesting corollaries: First, (i) results in invariant universal graph learning models; (ii) is an $E(d)$ invariant/equivariant GNN that maintains the power of message passing (Xu et al., 2018a; Morris et al., 2019); and (iii), (iv) furnish universal permutation and $E(d)$ invariant/equivariant models. We note that both the construction and the proofs are arguably considerably simpler than the existing alternative constructions and proofs for this type of symmetry (Thomas et al., 2018; Fuchs et al., 2020; Dym & Maron, 2020). We experimented with FA on different tasks involving symmetries, including point-cloud normal estimation, beyond 2-Weisfeiler-Lehman graph separation, and $n$ -body dynamics prediction, reaching state-of-the-art performance in all. + +# 2 FRAME AVERAGING + +In this section we introduce the FA approach using a generic formulation; in the next section we instantiate FA to different problems of interest.
+ +# 2.1 FRAME AVERAGING FOR FUNCTION SYMMETRIZATION + +Let $\phi : V \to \mathbb{R}$ and $\Phi : V \to W$ be some arbitrary functions, where $V, W$ are normed linear spaces with norms $\| \cdot \|_V$ , $\| \cdot \|_W$ , respectively. For example, $\phi, \Phi$ can be thought of as neural networks. We consider a group $G = \{g\}$ that describes some symmetry we want to incorporate into $\phi, \Phi$ . The way the symmetries $g \in G$ are applied to vectors in $V, W$ is described by the group's representations $\rho_1: G \to \mathrm{GL}(V)$ and $\rho_2: G \to \mathrm{GL}(W)$ , where $\mathrm{GL}(V)$ is the space of invertible linear maps $V \to V$ (automorphisms). A representation $\rho_i$ preserves the group structure by satisfying $\rho_i(gh) = \rho_i(g)\rho_i(h)$ for all $g, h \in G$ (see, e.g., Fulton & Harris (2013)). As is customary, we will sometimes refer to the linear spaces $V, W$ as representations. + +Our goal is to make $\phi$ into an invariant function, namely one satisfying $\phi(\rho_1(g)X) = \phi(X)$ for all $g \in G$ and $X \in V$ , and $\Phi$ into an equivariant function, namely one satisfying $\Phi(\rho_1(g)X) = \rho_2(g)\Phi(X)$ for all $g \in G$ and $X \in V$ . We will do that by averaging over group elements, but instead of averaging over the entire group every time (as in equation 1) we will average over a subset of the group elements called a frame. + +Definition 1. A frame is a set-valued function $\mathcal{F}:V\to 2^G\setminus \emptyset$ . + +1. A frame is $G$ -equivariant if $\mathcal{F}(\rho_1(g)X) = g\mathcal{F}(X), \quad \forall X \in V, g \in G$ , where as usual, $g\mathcal{F}(X) = \{gh \mid h \in \mathcal{F}(X)\}$ , and the equality should be understood as equality of sets. +2. A frame is bounded over a domain $K \subset V$ if there exists a constant $c > 0$ so that $\| \rho_2(g) \|_{\mathrm{op}} \leq c$ for all $g \in \mathcal{F}(X)$ and all $X \in K$ , where $\| \cdot \|_{\mathrm{op}}$ denotes the induced operator norm over $W$ .
+ +Figure 1 provides an illustration. How are equivariant frames useful? Consider a scenario where an equivariant frame is easy to compute, and furthermore its cardinality, $|\mathcal{F}(X)|$ , is not too large. Then averaging over the frame, denoted $\langle \cdot \rangle_{\mathcal{F}}$ and defined by + +$$ +\langle \phi \rangle_{\mathcal{F}}(X) = \frac{1}{|\mathcal{F}(X)|} \sum_{g \in \mathcal{F}(X)} \phi\left(\rho_{1}(g)^{-1} X\right) \tag{2} +$$ + +$$ +\langle \Phi \rangle_{\mathcal{F}}(X) = \frac{1}{|\mathcal{F}(X)|} \sum_{g \in \mathcal{F}(X)} \rho_{2}(g) \Phi\left(\rho_{1}(g)^{-1} X\right) \tag{3} +$$ + +![](images/0c0f74da84bc0bf967bc969608f5f30d49337a720707b77baf300975e1f91f8c.jpg) +Figure 1: Frame equivariance (sphere shape represents the group $G$ ; square represents $V$ ). + +provides the required function symmetrization. In Appendix A.1 we prove: + +Theorem 1 (Frame Averaging). Let $\mathcal{F}$ be a $G$ equivariant frame, and $\phi :V\to \mathbb{R}$ , $\Phi :V\to W$ some functions. Then, $\langle \phi \rangle_{\mathcal{F}}$ is $G$ invariant, while $\langle \Phi \rangle_{\mathcal{F}}$ is $G$ equivariant. + +Several comments are in order: First, the invariant case (equation 2) is a particular case of the equivariant case (equation 3) under the choice of $W = \mathbb{R}$ and the trivial representation $\rho_2(g) \equiv 1$ . Second, in this paper we only consider $X$ and frame choices $\mathcal{F}$ for which $\mathcal{F}(X)$ are finite sets. Nevertheless, treating the infinite case is an important future research direction. Third, a trivial choice of an equivariant frame is $\mathcal{F}(X) \equiv G$ , that is, taking the frame to be the entire group for all $X \in V$ (for infinite but compact $G$ the sum in the FA in this case can be replaced with a Haar integral). This choice can be readily checked to be equivariant, and turns the FA equations 2 and 3 into the standard group averaging operators of equation 1.
The problem with this choice, however, is that it often results in an intractable or challenging computation, e.g., when the group is large or infinite. In contrast, as we show below, in some useful cases one can compute a manageable-size frame and use it to build invariant or equivariant operators in a principled way. Let us provide a simple example of Frame Averaging: consider $V = \mathbb{R}^n$ , $W = \mathbb{R}$ , and $G = \mathbb{R}$ with addition as the group action. We choose the group actions in this case to be $\rho_1(a)\pmb{x} = \pmb{x} + a\mathbf{1}$ , and $\rho_2(a)b = b + a$ , where $a,b \in \mathbb{R}$ , $\pmb{x} \in \mathbb{R}^n$ , and $\mathbf{1} \in \mathbb{R}^n$ is the vector of all ones. We can define the frame in this case using the averaging operator $\mathcal{F}(\pmb{x}) = \left\{\frac{1}{n}\mathbf{1}^T\pmb{x}\right\} \subset G = \mathbb{R}$ . Note that in this case the frame contains only one element from the group; in other cases finding such a small frame is hard or even impossible. One can check that this frame is equivariant per Definition 1. The FA of $\phi : \mathbb{R}^n \to \mathbb{R}$ would be $\langle \phi \rangle_{\mathcal{F}}(\pmb{x}) = \phi (\pmb{x} - \frac{1}{n} (\pmb{1}^T\pmb{x})\pmb{1})$ in the invariant case, and $\langle \phi \rangle_{\mathcal{F}}(\pmb{x}) = \phi (\pmb{x} - \frac{1}{n} (\pmb{1}^T\pmb{x})\pmb{1}) + \frac{1}{n}\pmb{1}^T\pmb{x}$ in the equivariant case. + +Incorporating $G$ as a second symmetry. An important use case of frame averaging is when the backbones $\phi, \Phi$ are already invariant/equivariant w.r.t. some symmetry group $H$ and our goal is to make them invariant/equivariant to $H \times G$ . For example, say we want to add $G = E(3)$ equivariance to permutation invariant set or graph functions, i.e., $H = S_{n}$ . We will provide sufficient conditions for the FA to provide this desired invariance/equivariance.
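The one-dimensional translation example above is small enough to check numerically. A minimal NumPy sketch (the cubic `phi` is a hypothetical, deliberately non-invariant backbone):

```python
import numpy as np

def frame(x):
    # F(x) = {mean(x)}: a single element of the group G = (R, +).
    return [x.mean()]

def fa_invariant(phi, x):
    # Equation 2 with rho1(a)^{-1} x = x - a*1 (rho2 trivial).
    return np.mean([phi(x - a) for a in frame(x)])

def fa_equivariant(phi, x):
    # Equation 3 with rho2(a) b = b + a.
    return np.mean([phi(x - a) + a for a in frame(x)])

rng = np.random.default_rng(0)
x = rng.normal(size=5)
phi = lambda v: float(np.sum(v ** 3))  # hypothetical backbone, not itself invariant

a = 2.7  # an arbitrary translation rho1(a) x = x + a*1
assert np.isclose(fa_invariant(phi, x + a), fa_invariant(phi, x))          # invariance
assert np.isclose(fa_equivariant(phi, x + a), fa_equivariant(phi, x) + a)  # equivariance
```

Since the frame contains a single element, the averages in equations 2 and 3 each reduce to one evaluation of $\phi$ at the centered input.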
First, let us assume $H$ is acting on $V$ and $W$ by the representations $\tau_{1}: H \to \mathrm{GL}(V)$ and $\tau_{2}: H \to \mathrm{GL}(W)$ , respectively. Assume $\phi$ is $H$ invariant and $\Phi$ is $H$ equivariant. We say that representations $\rho_{1}$ and $\tau_{1}$ commute if $\rho_{1}(g)\tau_{1}(h)X = \tau_{1}(h)\rho_{1}(g)X$ for all $g \in G$ , $h \in H$ , and $X \in V$ . If $\rho_{1}$ and $\tau_{1}$ commute then the map $\gamma_{1}: H \times G \to \mathrm{GL}(V)$ defined by $\gamma_{1}(h,g) = \tau_{1}(h)\rho_{1}(g)$ is a representation of the group $H \times G$ . Second, we need the frame $\mathcal{F}(X)$ to be invariant to $H$ , that is, $\mathcal{F}(\tau_1(h)X) = \mathcal{F}(X)$ . We show a generalization of Theorem 1: + +Theorem 2 (Frame Averaging with a second symmetry). Assume $\mathcal{F}$ is $H$ -invariant and $G$ -equivariant. Then, + +1. If $\phi : V \to \mathbb{R}$ is $H$ invariant and $\rho_1, \tau_1$ commute then $\langle \phi \rangle_{\mathcal{F}}$ is $G \times H$ invariant. +2. If $\Phi : V \to W$ is $H$ equivariant and $\rho_i, \tau_i, i = 1,2$ , commute then $\langle \Phi \rangle_{\mathcal{F}}$ is $G \times H$ equivariant. + +Right actions. Above we used left actions in the definition of equivariance. There are other flavors of equivariance, e.g., when one of the actions is a right action. For example, if $g$ multiplies $\mathcal{F}(X)$ from the right, then equivariance takes the form + +$$ +\mathcal{F}\left(\rho_{1}(g)X\right) = \mathcal{F}(X)g^{-1}, \quad \forall X \in V, g \in G \tag{4} +$$ + +and accordingly + +$$ +\langle \phi \rangle_{\mathcal{F}}(X) = \frac{1}{|\mathcal{F}(X)|} \sum_{g \in \mathcal{F}(X)} \phi(\rho_{1}(g)X), \quad \langle \Phi \rangle_{\mathcal{F}}(X) = \frac{1}{|\mathcal{F}(X)|} \sum_{g \in \mathcal{F}(X)} \rho_{2}(g)^{-1} \Phi(\rho_{1}(g)X) \tag{5} +$$ + +are $G$ invariant and equivariant, respectively. + +Efficient calculation of invariant frame averaging.
There could be instances of the FA framework (indeed we discuss such a case later) where $|\mathcal{F}(X)|$ is still too large to evaluate equations 2 and 3. In the invariant case, there is a more efficient form of FA that can potentially be applied. To show it, let us start by defining the subgroup of symmetries of $X$ , i.e., its stabilizer. The stabilizer of an element $X \in V$ is a subgroup of $G$ defined by $G_X = \{g \in G \mid \rho_1(g)X = X\}$ . $G_X$ naturally induces an equivalence relation $\sim$ on $\mathcal{F}(X)$ , with $g \sim h \iff hg^{-1} \in G_X$ . The equivalence classes (orbits) are $[g] = \{h \in \mathcal{F}(X) \mid g \sim h\} = G_X g \subset \mathcal{F}(X)$ , for $g \in \mathcal{F}(X)$ , and the quotient set is denoted $\mathcal{F}(X) / G_X$ . + +Theorem 3. An equivariant frame $\mathcal{F}(X)$ is a disjoint union of equal-size orbits, $[g] \in \mathcal{F}(X) / G_X$ . + +The proof is in Appendix A.3. The first immediate consequence of Theorem 3 is that the cardinality of $\mathcal{F}(X)$ is at least that of the stabilizer (intuitively, the inner symmetries) of $X$ , namely $|\mathcal{F}(X)| \geq |G_X|$ . Therefore, there could be cases, such as when $X$ describes a symmetric graph, where $|\mathcal{F}(X)|$ could be too large to average over. A remedy comes from the following observation: for every $h \in [g]$ , we have that $h = rg$ , $r \in G_X$ , and $\phi(\rho_1(h)^{-1}X) = \phi(\rho_1(g)^{-1}\rho_1(r)^{-1}X) = \phi(\rho_1(g)^{-1}X)$ , since also $r^{-1} \in G_X$ . Therefore the summands in equation 2 are constant over orbits, and we get + +$$ +\langle \phi \rangle_{\mathcal{F}}(X) = \frac{1}{m_{\mathcal{F}}} \sum_{[g] \in \mathcal{F}(X) / G_{X}} \phi\left(\rho_{1}(g)^{-1} X\right), \tag{6} +$$ + +where $m_{\mathcal{F}} = |\mathcal{F}(X) / G_X| = |\mathcal{F}(X)| / |G_X|$ .
This representation of invariant FA requires only $m_{\mathcal{F}} = |\mathcal{F}(X)| / |G_X|$ evaluations, compared to $|\mathcal{F}(X)|$ in the original FA in equation 2. + +Approximation of invariant frame averaging. Unfortunately, enumerating $\mathcal{F}(X) / G_X$ could be challenging in some cases. Nevertheless, equation 6 is still very useful: it turns out we can easily draw a random element from $\mathcal{F}(X) / G_X$ with uniform probability. This is an immediate application of the equal orbit size in Theorem 3: + +Corollary 1. Let $\mathcal{F}(X)$ be an equivariant frame, and $g \in \mathcal{F}(X)$ be a uniform random sample. Then $[g] \in \mathcal{F}(X) / G_X$ is also uniform. + +Therefore, an efficient approximation strategy is averaging over uniform samples, $g_{i} \in \mathcal{F}(X), i \in [k]$ : + +$$ +\langle \langle \phi \rangle \rangle_{\mathcal{F}}(X) = \frac{1}{k} \sum_{i = 1}^{k} \phi\left(\rho_{1}\left(g_{i}\right)^{-1} X\right). \tag{7} +$$ + +This approximation is especially useful, compared to the full-blown FA, when $m_{\mathcal{F}} = |\mathcal{F}(X)| / |G_X|$ is small, i.e., when $|G_{X}|$ is large, or $X$ has many symmetries. Intuitively, the smaller $m_{\mathcal{F}}$ , the better the approximation in equation 7. A partial explanation of this phenomenon is given in Appendix A.4, while an empirical validation is provided in Section 5.2. + +# 2.2 EXPRESSIVE POWER + +Another benefit of frame averaging as presented in equations 2 and 3 is that it preserves the expressive power of the base models, $\phi$ , $\Phi$ , as explained next. Consider some hypothesis function space $\mathcal{H} = \{\Phi\} \subset \mathcal{C}(V,W)$ , where $\mathcal{C}(V,W)$ is the set of all continuous functions $V \to W$ . As mentioned above, the case of scalar functions $\phi$ is the special case where $W = \mathbb{R}$ and $\rho_2(g) \equiv 1$ .
$\mathcal{H}$ can be seen as the collection of all functions represented by a certain class of neural networks, e.g., Multilayer Perceptrons (MLPs), DeepSets (Zaheer et al., 2017), or Message Passing Graph Neural Networks (Gilmer et al., 2017). We denote by $\langle \mathcal{H}\rangle$ the collection of functions $\Phi \in \mathcal{H}$ after applying the frame averaging in equation 3, $\langle \mathcal{H}\rangle = \{\langle \Phi\rangle_{\mathcal{F}} \mid \Phi \in \mathcal{H}\}$ . + +We set some domain $K \subset V$ over which we would like to test the approximation power of $\langle \mathcal{H} \rangle$ . To make sure that FA is well defined over $K$ we will assume it is frame-finite, i.e., for every $X \in K$ , $\mathcal{F}(X)$ is a finite set. Next, we denote $K_{\mathcal{F}} = \{\rho_1(g)^{-1}X \mid X \in K, g \in \mathcal{F}(X)\}$ ; intuitively, $K_{\mathcal{F}} \subset V$ contains all the points sampled by the FA operator. Lastly, to quantify the approximation error over a set $A \subset V$ we use the maximum norm $\| \Phi \|_{A,W} = \max_{X \in A} \| \Phi(X) \|_W$ . We prove that an arbitrary equivariant function $\Psi \in \mathcal{C}(V,W)$ approximable by a function from $\mathcal{H}$ over $K_{\mathcal{F}}$ is approximable by an equivariant function from $\langle \mathcal{H} \rangle$ (proof details can be found in Appendix A.5). + +Theorem 4 (Expressive power of FA). If $\mathcal{F}$ is a bounded $G$ -equivariant frame, defined over a frame-finite domain $K$ , then for an arbitrary equivariant function $\Psi \in \mathcal{C}(V,W)$ we have + +$$ +\inf_{\Phi \in \mathcal{H}} \left\| \Psi - \left\langle \Phi \right\rangle_{\mathcal{F}} \right\|_{K, W} \leq c \inf_{\Phi \in \mathcal{H}} \left\| \Psi - \Phi \right\|_{K_{\mathcal{F}}, W}, +$$ + +where $c$ is the constant from Definition 1. + +This theorem can be used to prove universality results if the backbone model is universal, even for non-compact groups (e.g., the Euclidean motion group).
Below we will use it to prove universality of frame averaged architectures for graphs and point clouds with Euclidean motion symmetry. + +# 3 MODEL INSTANCES + +We instantiate the FA framework by specifying: i) the symmetry group $G$ , representations $\rho_{1}, \rho_{2}$ and the underlying frame $\mathcal{F}$ ; ii) the backbone architecture for $\phi$ (invariant) or $\Phi$ (equivariant). + +# 3.1 POINT CLOUDS, EUCLIDEAN MOTIONS + +Symmetry. We would like to incorporate Euclidean symmetry into existing permutation invariant/equivariant point cloud networks. The symmetry of interest is $G = E(d) = O(d) \ltimes T(d)$ , namely the group of Euclidean motions in $\mathbb{R}^d$ defined by rotations and reflections $O(d)$ , and translations $T(d)$ . We also discuss $G = SE(d) = SO(d) \ltimes T(d)$ , where $SO(d)$ is the group of rotations in $\mathbb{R}^d$ . We define $V = \mathbb{R}^{n \times d}$ , and the group representation is $\rho_1(g)\pmb{X} = \pmb{X}\pmb{R}^T +\pmb{1}\pmb{t}^T$ , where $\pmb{R} \in O(d)$ or $\pmb{R} \in SO(d)$ , and $\pmb{t} \in \mathbb{R}^d$ denotes the translation. $W, \rho_{2}$ are defined similarly, unless translation invariance is desired, in which case we use the representation $\rho_{2}(g)\pmb{X} = \pmb{X}\pmb{R}^{T}$ . + +Frame. We define the frame $\mathcal{F}(\pmb{X})$ in this case based on Principal Component Analysis (PCA), as follows. Let $\pmb{t} = \frac{1}{n}\pmb{X}^T\pmb{1} \in \mathbb{R}^d$ be the centroid of $\pmb{X}$ , and $\pmb{C} = (\pmb{X} - \pmb{1}\pmb{t}^T)^T(\pmb{X} - \pmb{1}\pmb{t}^T) \in \mathbb{R}^{d\times d}$ the covariance matrix computed after removing the centroid from $\pmb{X}$ . In the generic case the eigenvalues of $\pmb{C}$ satisfy $\lambda_1 < \lambda_2 < \dots < \lambda_d$ . Let $\pmb{v}_1, \pmb{v}_2, \dots, \pmb{v}_d$ be the corresponding unit-length eigenvectors. Then we define $\mathcal{F}(\pmb{X}) = \{([\alpha_1\pmb{v}_1, \dots, \alpha_d\pmb{v}_d], t) \mid \alpha_i \in \{-1, 1\} \} \subset E(d)$ .
The size of this frame (when it is defined) is $2^d$ , which for typical dimensions $d = 2, 3$ amounts to frames of size 4 and 8, respectively. For $G = SE(d)$ , we restrict $\mathcal{F}(\pmb{X})$ to orthogonal matrices of positive orientation; generically there are $2^{d-1}$ such elements, which amounts to 2 and 4 elements for $d = 2, 3$ , respectively. + +Proposition 1. $\mathcal{F}(\pmb {X})$ based on the covariance and centroid is $E(d)$ equivariant and bounded. + +This choice of frame is defined for every $X \in V$ for which the covariance matrix $C$ has a simple spectrum (i.e., non-repeating eigenvalues). It is known that within symmetric matrices, those with repeating eigenvalues are of co-dimension 2 (see, e.g., Breiding et al. (2018)). Therefore, $\mathcal{F}(\boldsymbol{X})$ is defined for almost all $\boldsymbol{X}$ , except rare singular points. Where it is defined, $\mathcal{F}$ is continuous as a direct consequence of perturbation theory of eigenvalues and eigenvectors of normal matrices (see, e.g., Theorem 8.1.12 in Golub & Van Loan (1996)). However, very close to repeating eigenvalues, a small perturbation can lead to a large frame change. In Appendix B we present an empirical study of frame stability and the likelihood of encountering repeating eigenvalues in practice. + +Since we would like to incorporate $E(d)$ symmetries into already $S_{n}$ invariant/equivariant architectures, per Theorem 2 we will also need to show that $\rho_{1}$ (and similarly $\rho_{2}$ ) commutes with $\tau : S_{n} \to \mathrm{GL}(V)$ defined by $\tau(h)\mathbf{X} = \mathbf{P}\mathbf{X}$ , where $\mathbf{P} = \mathbf{P}_{h}$ , $h \in S_{n}$ , is the permutation representation; that is, $\mathbf{P}_{h} \in \mathbb{R}^{n \times n}$ is the permutation matrix representing $h \in S_{n}$ , with $\mathbf{P}_{ij} = 1$ if $i = h(j)$ and 0 otherwise. Indeed $\tau(h)\rho_{1}(g)\mathbf{X} = \mathbf{P}(\mathbf{X}\mathbf{R}^{T} + \mathbf{1}\mathbf{t}^{T}) = \rho_{1}(g)\tau(h)\mathbf{X}$ .
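The PCA frame above, plugged into the equivariant average of equation 3 with the translation-dropping $\rho_2(g)\pmb{X} = \pmb{X}\pmb{R}^T$ , can be sketched in a few lines of NumPy; the shifted tanh stands in for a hypothetical elementwise backbone $\Phi$:

```python
import itertools
import numpy as np

def pca_frame(X):
    """The 2^d elements (R, t) of F(X): signed PCA axes plus the centroid."""
    t = X.mean(axis=0)
    C = (X - t).T @ (X - t)
    _, V = np.linalg.eigh(C)  # columns: eigenvectors, ascending eigenvalues
    d = X.shape[1]
    return [(V * np.array(signs), t)  # flip the sign of column i by signs[i]
            for signs in itertools.product([-1.0, 1.0], repeat=d)]

def fa_equivariant(Phi, X):
    # <Phi>_F(X) = (1/|F|) sum_{(R,t) in F} Phi((X - 1 t^T) R) R^T,
    # using rho1(g)^{-1} X = (X - 1 t^T) R and rho2(g) Y = Y R^T.
    outs = [Phi((X - t) @ R) @ R.T for R, t in pca_frame(X)]
    return np.mean(outs, axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
Phi = lambda Y: np.tanh(Y + 0.5)  # hypothetical elementwise (permutation equivariant) backbone

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
s = rng.normal(size=3)                        # random translation
# E(3) equivariance with translation invariant output: rotating and
# translating the input rotates the output.
assert np.allclose(fa_equivariant(Phi, X @ Q.T + s), fa_equivariant(Phi, X) @ Q.T)
```

The assertion holds up to floating point because the frame of the transformed cloud is $\{(\pmb{Q}\pmb{R}, \pmb{Q}\pmb{t} + \pmb{s})\}$, so the average runs over the same terms rotated by $\pmb{Q}$; it can fail only in the non-generic case of (near-)repeating covariance eigenvalues.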
Furthermore, note that $\mathcal{F}(\tau(h)\mathbf{X}) = \mathcal{F}(\mathbf{X})$ ; therefore $\mathcal{F}$ is also $S_{n}$ invariant. + +Backbone architectures. We incorporate the FA framework into two existing popular point cloud networks: i) PointNet (Qi et al., 2017a); and ii) DGCNN (Wang et al., 2018). We denote both architectures by $\Phi_{d,d'}: \mathbb{R}^{n \times d} \to \mathbb{R}^{n \times d'}$ for the $S_n$ equivariant version of these models. To simplify the discussion, we omit particular choices of layers and feature dimensions; Appendix C.1 provides the full details. We experimented with two design choices: First, consider the frame averaged $\Phi_{3,3}$ , i.e., $\langle \Phi_{3,3} \rangle_{\mathcal{F}}$ , yielding universal, $E(3)$ equivariant versions of PointNet and DGCNN, dubbed FA-PointNet and FA-DGCNN. A more complex design choice, taking inspiration from deep architectures with multiple modules, is to compose blocks of FA networks. For example, to build a locally $E(3)$ equivariant version of PointNet, let $\Upsilon_{d,d'}: \mathbb{R}^d \to \mathbb{R}^{d'}$ denote an MLP. We decompose the input point cloud into $k$ -nn patches $\pmb{X}_i \in \mathbb{R}^{k \times 3}$ , $i \in [n]$ , where each patch is sorted by distance. Next, we feed each patch into an equivariant FA $\Upsilon_{3k,3d}$ , each with its own frame $\mathcal{F}_i$ , resulting in $E(3)$ equivariant features in $\mathbb{R}^{3d}$ for every point, $\pmb{Y} \in \mathbb{R}^{n \times 3d}$ ; we then apply an equivariant FA $\Phi_{3d,3}$ , providing output in $\mathbb{R}^{n \times 3}$ . That is, $\Psi(\pmb{X}) = \langle \Phi_{3d,3} \rangle_{\mathcal{F}}\left(\left[\langle \Upsilon_{3k,3d} \rangle_{\mathcal{F}_1}(\pmb{X}_1), \ldots, \langle \Upsilon_{3k,3d} \rangle_{\mathcal{F}_n}(\pmb{X}_n)\right]\right)$ , where brackets denote concatenation along the first dimension. We name this construction FA-Local-PointNet. + +Universality.
We use Theorem 4 to prove that using a universal set invariant/equivariant backbone $\phi, \Phi$, such as DeepSets or PointNet (see e.g., Zaheer et al. (2017); Qi et al. (2017a); Segol & Lipman (2019)), leads to a universal model. Let $\mathcal{H}$ be any such universal set-equivariant function collection. That is, for an arbitrary continuous set function $\Psi$ we have (in the notation of Section 2.2) $\inf_{\Phi \in \mathcal{H}} \| \Psi - \Phi \|_{\Omega, W} = 0$ for arbitrary compact sets $\Omega \subset V$. If $K \subset V$ is some bounded domain, then the choice of frame $\mathcal{F}$ described above implies that $K_{\mathcal{F}}$ is also bounded and therefore contained in some compact set $\Omega \subset V$. Consequently, Proposition 1 and Theorem 4 imply Corollary 2. A similar result holds for $SE(d)$, which provides an expressiveness guarantee similar to the one from (Dym & Maron, 2020) analyzing Tensor Field Networks (Thomas et al., 2018; Fuchs et al., 2020).

Corollary 2 (FA-DeepSets/PointNet are universal). Frame Averaging DeepSets/PointNet using the frame $\mathcal{F}$ defined above results in a universal $E(d)\times S_{n}$ invariant/equivariant model over bounded frame-finite sets $K\subset V$.

# 3.2 GRAPHS, PERMUTATIONS

Symmetry and frame. Let $G = S_{n}$, and $V = \mathbb{R}^{n\times d}\times \mathbb{R}^{n\times n}$, where $\mathbf{X} = (\mathbf{Y},\mathbf{A})\in V$ represents a set of node features $\mathbf{Y}\in \mathbb{R}^{n\times d}$ and an adjacency matrix (or some edge attributes) $\mathbf{A}\in \mathbb{R}^{n\times n}$; we assume undirected graphs, meaning $\mathbf{A} = \mathbf{A}^T$. For $\mathbf{X}\in V$, the representation $\rho_{1}$ is defined by $\rho_{1}(g)\mathbf{X} = (\mathbf{P}\mathbf{Y},\mathbf{P}\mathbf{A}\mathbf{P}^{T})$, where $\mathbf{P} = \mathbf{P}_{g}$ is the permutation matrix representing $g\in S_{n}$.
We define $\mathcal{F}(\mathbf{X})$ to contain all $g\in S_{n}$ that sort the rows of the matrix $S(\mathbf{X})$ in a column-lexicographic manner, i.e., such that $\mathbf{P}S(\mathbf{X})$ is lexicographically sorted; the matrix $S$ is defined as follows. Let $L = \mathrm{diag}(A\mathbf{1}) - A$ be the graph's Laplacian. For every eigenspace (traversed in increasing eigenvalue order), spanned by the orthogonal basis $u_{1},\ldots ,u_{k}$, we add the (equivariant) column $\mathrm{diag}(\sum_{i = 1}^{k}\pmb {u}_{i}\pmb{u}_{i}^{T})$ to $S$ (Fürer, 2010). Hence, the number of columns of $S$ equals the number of unique eigenvalues of $L$.

Proposition 2. $\mathcal{F}(\mathbf{X})$ defined by sorting of $S(\mathbf{X})$ is $S_{n}$-equivariant and bounded.

Backbone architectures. In this case we perform only invariant tasks, for which we chose two universal backbone architectures for $\phi$: (i) an MLP applied to $\operatorname{vec}(\mathbf{Y}, \mathbf{A})$; and (ii) GNN+ID (Murphy et al., 2019; Loukas, 2020), denoting a GNN backbone equipped with node identifiers as node features. We perform FA with $\phi$ according to the frame constructed in Proposition 2. Note that Theorem 3 implies $|\mathcal{F}(\mathbf{X})| \geq |\operatorname{Aut}(\mathbf{X})|$, since the stabilizer $G_{\mathbf{X}}$ is the automorphism group of the graph, i.e., $g \in G_{\mathbf{X}} = \operatorname{Aut}(\mathbf{X})$ iff $\rho_1(g)\mathbf{X} = \mathbf{X}$. This means that for symmetric graphs equation 2 can be too costly. In this case, we use the approximate invariant FA, equation 7.

Universality. Let us use Theorem 4 to prove that FA-MLP and FA-GNN+ID are universal. Again, it is enough to consider the equivariant case. Let $\mathcal{H} \subset \mathcal{C}(V, W)$ denote the collection of functions that can be represented by MLPs or GNN+ID.
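The spectral construction of $S(\mathbf{X})$ and the sorting frame can be sketched as follows (our illustration, with assumed names `spectral_columns` and `sorting_frame`; graphs with many symmetries need care with the eigenvalue-grouping tolerance and row ties):

```python
import numpy as np


def spectral_columns(A, tol=1e-8):
    """Build S(X): one permutation-equivariant column per distinct
    Laplacian eigenvalue, namely diag of the eigenspace projector
    diag(sum_i u_i u_i^T) (Furer, 2010)."""
    L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
    lam, U = np.linalg.eigh(L)               # eigenvalues in ascending order
    cols, i = [], 0
    while i < len(lam):
        j = i
        while j < len(lam) and lam[j] - lam[i] < tol:
            j += 1                           # group a (near-)repeated eigenvalue
        proj = U[:, i:j] @ U[:, i:j].T       # basis-independent projector
        cols.append(np.diag(proj))
        i = j
    return np.stack(cols, axis=1)            # one column per eigenspace


def sorting_frame(A):
    """One element of F(X): a permutation sorting the rows of S(X)
    column-lexicographically (row ties make the frame larger; here we
    simply return a single valid representative)."""
    S = np.round(spectral_columns(A), 6)     # round for stable comparisons
    return np.lexsort(S.T[::-1])             # primary key: first column
```

Sorting the rows of $S(\mathbf{X})$ yields a canonical ordering: the sorted matrix is identical (up to floating point) for any relabeling of the nodes, which is exactly the invariance the frame provides.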
Universality results in (Pinkus, 1999; Loukas, 2020; Puny et al., 2020) imply that for any continuous graph (i.e., $S_{n}$ equivariant) function $\Psi$, $\inf_{\Phi \in \mathcal{H}} \|\Psi - \Phi\|_{\Omega, W} = 0$ for any compact $\Omega \subset V$. Let $K \subset V$ be some bounded domain; since $G$ is finite, $K_{\mathcal{F}}$ is also bounded and is contained in some compact set. Proposition 2 and Theorem 4 now imply:

Corollary 3 (FA-MLP and FA-GNN+ID are graph universal). Frame Averaging MLP/GNN+ID using the frame $\mathcal{F}$ above results in a universal $S_{n}$ equivariant graph model over bounded domains $K\subset V$.

# 3.3 GRAPHS, EUCLIDEAN MOTIONS

Symmetry and frame. We consider the group $G = E(d)$ acting on graphs, i.e., $V = \mathbb{R}^{n\times d}\times \mathbb{R}^{n\times n}$, where $\mathbf{X} = (\mathbf{Y},\mathbf{A})\in V$ represents a set of node and edge attributes, as described above. The group representation is $\rho_{1}(g)\mathbf{X} = \rho_{1}(g)(\mathbf{Y},\mathbf{A}) = (\mathbf{Y}\mathbf{R}^{T} + \mathbf{1}\mathbf{t}^{T},\mathbf{A})$. We define the frame $\mathcal{F}(\mathbf{X}) = \mathcal{F}(\mathbf{Y})$ using the node features as in the point cloud case, Section 3.1. Therefore Proposition 1 implies $\mathcal{F}$ is equivariant and bounded. Next, in this case too we would like to incorporate $E(d)$ symmetries into already $S_{n}$ invariant/equivariant graph neural network architectures; again per Theorem 2, we also need to show that $\rho_{1}$ (and similarly $\rho_{2}$) commutes with $\tau :S_n\to \mathrm{GL}(V)$ defined by $\tau (h)\mathbf{X} = (\mathbf{P}\mathbf{Y},\mathbf{P}\mathbf{A}\mathbf{P}^T)$, where $\mathbf{P} = \mathbf{P}_{h}$ is the permutation matrix of $h\in S_n$. Indeed, $\tau (h)\rho_1(g)\mathbf{X} = (\mathbf{P}(\mathbf{Y}\mathbf{R}^T +\mathbf{1}\mathbf{t}^T),\mathbf{P}\mathbf{A}\mathbf{P}^T) = \rho_1(g)\tau (h)\mathbf{X}$.
Furthermore, note that, as in the point cloud case, $\mathcal{F}(\tau (h)\mathbf{X}) = \mathcal{F}(\mathbf{X})$; therefore $\mathcal{F}$ is also $S_{n}$ invariant.

Backbone architecture. The backbone architecture we chose for this instantiation is the Message Passing GNN of Gilmer et al. (2017), an $S_{n}$ equivariant model denoted $\Phi_{d,d'}: \mathbb{R}^{n\times d} \times \mathbb{R}^{n\times n} \to \mathbb{R}^{n\times d'} \times \mathbb{R}^{n\times n}$. In this case we constructed a model, as suggested above, by composing $l$ equivariant layers: $\Psi(\mathbf{X}) = \langle \Phi_{3d',3}^{(l)}\rangle_{\mathcal{F}} \circ \dots \circ \langle \Phi_{3d',3d'}^{(i)}\rangle_{\mathcal{F}} \circ \dots \circ \langle \Phi_{6,3d'}^{(1)}\rangle_{\mathcal{F}}(\mathbf{X})$. The input feature size is 6 since we use velocities in addition to initial positions as input (the $n$-body problem). We name this model FA-GNN.

# 4 PREVIOUS WORKS

Rotation invariant and equivariant point networks. State-of-the-art $S_{n}$ invariant networks, e.g., (Qi et al., 2017a;b; Atzmon et al., 2018; Li et al., 2018; Xu et al., 2018b; Wang et al., 2018), are not invariant/equivariant to rotations/reflections by construction (Chen et al., 2019). Invariance to global or local rotations can be achieved by modifying the 3D convolution operator or modifying the input representation. Relative angles and distances across points (Deng et al., 2018; Zhang et al., 2019), or angles and distances w.r.t. normals (Gojcic et al., 2019), can be used for rotation invariance. Other works use local or global frames to achieve invariance to rotations and translations. Xiao et al. (2020); Yu et al. (2020); Deng et al. (2018) also use PCA to define rotation invariance, and can be seen as instances of the FA framework.
We augment this line of work by introducing a more general framework that includes equivariance to rotation/reflection and translation, more general architectures and symmetries, as well as a theoretical analysis of the expressive power of such models.

Equivariance is a desirable property for 3D recognition, registration (Ao et al., 2021), and other domains such as complex physical systems (Kondor, 2018). A popular line of work utilizes the theory of spherical harmonics to achieve equivariance (Worrall et al., 2017; Esteves et al., 2018; Liu et al., 2018; Weiler et al., 2018; Cohen et al., 2018). Notably, Tensor Field Networks (TFN), $SE(3)$ transformers, and Group-Equivariant Attention (Thomas et al., 2018; Fuchs et al., 2020; Romero & Cordonnier, 2021) achieve equivariance to both translation and rotation, i.e., $SE(3)$, and are maximally expressive (i.e., universal) as shown in Dym & Maron (2020). These methods, however, are specifically adapted to $SE(3)$ and require high order $SO(3)$ representations as features. Recently, Deng et al. (2021) proposed a rotation equivariant network by introducing tensor features, linear layers that act equivariantly on them, and equivariant non-linearities; however, their architecture is not proven to be universal. Discrete convolutions (Cohen et al., 2019; Li et al., 2019; Worrall & Brostow, 2018) have also been used for achieving equivariance. In particular, Chen et al. (2021) propose point networks that are $SE(3)$ equivariant and use separable discrete convolutions. Lastly, Finzi et al. (2020) construct equivariant layers using local group convolution, which extends beyond rotations to any Lie group.

Graph neural networks. Message passing (MP) GNNs (Gilmer et al., 2017) are designed to be $S_{n}$ equivariant. Kondor et al. (2018) introduce a broader set of equivariant operators in MP-GNNs, while Maron et al. (2018) provide a full characterization of linear permutation invariant/equivariant GNN layers.
In a parallel approach, aiming to avoid the loss of expressiveness caused by restricted architectures (Xu et al., 2018a; Morris et al., 2019), other works suggested symmetrization of non-invariant/equivariant backbones, ranging from eliminating all symmetries by a canonical ordering (Niepert et al., 2016) to averaging over the entire symmetry group (Murphy et al., 2019; 2018), which amounts to the trivial frame $\mathcal{F} \equiv \rho(G)$, with the symmetry group $G = S_{n}$. As we have also shown, this approach comes at the cost of high-variance approximations that hinder the learning process. Our FA framework reduces the variance both by choosing a canonical ordering and by addressing the fact that it may not be unique.

Recently, a body of work studies GNNs with invariance/equivariance to $E(3)$ (or a similar group) to deal with symmetries in molecular data or dynamical systems. Many $SE(3)$ equivariant constructions (Anderson et al., 2019; Fuchs et al., 2020; Batzner et al., 2021; Klicpera et al., 2021) extend TFN (Thomas et al., 2018) and inherit its expensive higher order feature representations in the intermediate layers. Finally, a recent work by Satorras et al. (2021) provides an efficient message passing construction which is $E(d)$ equivariant but has not yet been shown to be universal.

# 5 EXPERIMENTS

We evaluate our FA framework on a few invariant/equivariant point cloud and graph learning tasks: point cloud normal estimation ($O(3)$ equivariant and translation invariant); graph separation tasks ($S_{n}$ invariant); and particle position estimation, i.e., the $n$-body problem ($E(3)$ equivariant).

# 5.1 POINT CLOUDS: NORMAL ESTIMATION

Normal estimation is a core geometry processing task, where the goal is to estimate normals from an input point cloud, $\mathbf{X} \in \mathbb{R}^{n \times 3}$. This task is $O(3)$ equivariant and translation invariant.
To test the effect of rotated data on the different models, we experimented with three different settings: i) $I/I$ - training and testing on the original data; ii) $I/SO(3)$ - training on the original data and testing on randomly rotated data; and iii) $SO(3)/SO(3)$ - training and testing with randomly rotated data. We used the ABC dataset (Koch et al., 2019), which contains 3 collections (of 10k, 50k, and 100k Computer-Aided Design (CAD) models, respectively). We follow the protocol of the benchmark suggested in Koch et al. (2019), and quantitatively measure normal estimation quality via $1 - (\boldsymbol{n}^T\hat{\boldsymbol{n}})^2$, with $\boldsymbol{n}$ being the ground truth normal and $\hat{\boldsymbol{n}}$ the normal prediction. We used the same random train/test splits as Koch et al. (2019). For baselines, we chose the PointNet (Qi et al., 2017a) and DGCNN (Wang et al., 2018) architectures, which are popular permutation equivariant 3D point cloud architectures. In addition, we also compare to VN-PointNet and VN-DGCNN (Deng et al., 2021), a recent state-of-the-art approach for $SO(3)$ equivariant network design. We also tested our FA models, FA-PointNet and FA-DGCNN, as described in Section 3.1, as well as our local $O(3)$ equivariant model, FA-Local-PointNet. See Appendix C.1 for further implementation details. The results in Table 1 showcase the advantages of our FA framework: incorporating FA into existing architectures is beneficial in scenarios (ii-iii), outperforming augmentation by a large margin. In contrast to VN models, FA models maintain (and in some cases even improve) the baseline estimation quality in (i), which we attribute to the expressive power of the FA models.
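For reference, the benchmark's unoriented error $1 - (\boldsymbol{n}^T\hat{\boldsymbol{n}})^2$ can be computed per point cloud as follows (a small sketch of ours; `normal_error` is not a name from the benchmark code):

```python
import numpy as np


def normal_error(n_true, n_pred):
    """Unoriented normal estimation error 1 - (n^T n_hat)^2, averaged
    over points; both inputs are (num_points, 3) arrays of unit normals."""
    cos = np.sum(n_true * n_pred, axis=1)    # per-point n^T n_hat
    return float(np.mean(1.0 - cos ** 2))    # 0 = perfect, 1 = orthogonal
```

Note the squared dot product makes the metric orientation-agnostic: a prediction and its flip $-\hat{\boldsymbol{n}}$ receive the same score.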
| Method | I/I (10k) | I/SO(3) (10k) | SO(3)/SO(3) (10k) | I/I (50k) | I/SO(3) (50k) | SO(3)/SO(3) (50k) | I/I (100k) | I/SO(3) (100k) | SO(3)/SO(3) (100k) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PointNet | .207±.004 | .449±.006 | .258±.002 | .188±.002 | .430±.007 | .232±.001 | .188±.006 | .419±.006 | .231±.001 |
| VN-PointNet | .215±.003 | .216±.004 | .223±.004 | .185±.002 | .186±.006 | .187±.006 | .189±.004 | .188±.007 | .185±.003 |
| FA-PointNet | .158±.001 | .163±.001 | .161±.002 | .148±.001 | .148±.003 | .150±.002 | .148±.002 | .147±.001 | .149±.001 |
| FA-Local-PointNet | .097±.001 | .098±.001 | .098±.001 | .091±.001 | .090±.001 | .091±.001 | .091±.001 | .090±.002 | .091±.002 |
| DGCNN | .070±.003 | .193±.015 | .121±.001 | .061±.004 | .174±.007 | .122±.001 | .058±.002 | .173±.003 | .112±.001 |
| VN-DGCNN | .133±.003 | .130±.001 | .144±.007 | .127±.005 | .125±.001 | .127±.006 | .127±.005 | .125±.001 | .127±.005 |
| FA-DGCNN | .067±.001 | .069±.002 | .071±.003 | .065±.001 | .067±.004 | .068±.004 | .073±.008 | .067±.001 | .071±.009 |

Table 1: Normal estimation, ABC dataset (Koch et al., 2019) benchmark.

# 5.2 GRAPHS: EXPRESSIVE POWER

Producing GNNs that are both expressive and computationally tractable is a long-standing goal of the graph learning community. In this experiment we test graph separation (an $S_{n}$ invariant task): the ability of models to separate and classify graphs, a basic trait for graph learning. We use two datasets: GRAPH8c (Balcilar et al., 2021), which consists of all non-isomorphic, connected 8-node graphs; and EXP (Abboud et al., 2021), which consists of 3-WL distinguishable graphs that are not 2-WL distinguishable. There are two tasks: (i) count pairs of graphs not separated by a randomly initialized model in GRAPH8c and EXP; and (ii) learn to classify EXP into two classes. We follow the experimental setup of Balcilar et al. (2021). As baselines we use GCN (Kipf & Welling, 2016), GAT (Velickovic et al., 2018), GIN (Xu et al., 2018a), CHEBNET (Tang et al., 2019), PPGN (Maron et al., 2019), and GNNML3 (Balcilar et al., 2021), all of which are equivariant by construction. We compare to our FA-MLP and FA-GIN+ID, described in Section 3.2, which provide a tractable option for universal GNNs. We also compare to the trivial (i.e., entire group) frame averaging $\mathcal{F} \equiv G$, denoted GA-MLP and GA-GIN+ID, as advocated in (Murphy et al., 2019). Appendix C.2 provides full implementation details.

| Model | GRAPH8c | EXP | EXP-classify |
| --- | --- | --- | --- |
| GCN | 4755 | 600 | 50% |
| GAT | 1828 | 600 | 50% |
| GIN | 386 | 600 | 50% |
| CHEBNET | 44 | 71 | 82% |
| PPGN | 0 | 0 | 100% |
| GNNML3 | 0 | 0 | 100% |
| GA-MLP | 0 | 0 | 50% |
| FA-MLP | 0 | 0 | 100% |
| GA-GIN+ID | 0 | 0 | 50% |
| FA-GIN+ID | 0 | 0 | 100% |

Table 2: Graph separation (Balcilar et al., 2021; Abboud et al., 2021).

The first two columns in Table 2 show the results of the separation task (i). As expected, both FA and GA provide perfect separation; however, they are not perfectly invariant unless one computes the full averages over $\mathcal{F}$ and $G$, respectively. Since for some graphs (with many symmetries) full averages are not feasible, we compare the invariance error (see appendix for the exact definition) of GA with that of the approximate FA, equation 7, for 50 randomly permuted inputs. Figure 2-left shows the mean, std, and 90th percentile of the invariance error for increasing sample size $k$. Note that approximate FA is more invariant even with as few as $k = 1$ samples, which explains why, in the classification task (ii) in Table 2, right column, FA is able to learn while GA fails when both are trained with sample size $k = 1$. Figure 2-right compares $m_{\mathcal{F}}$ and $m_G$ (the maximal number of unique elements in the FA, equation 6) for the GRAPH8c dataset. The color and size of the points in the plot represent how many graphs in the dataset have the corresponding $(m_G, m_{\mathcal{F}})$ values. This demonstrates the benefit (in one example) of FA over GA in view of Theorem 5 and the approximation in equation 7.

![](images/d8c9d35c1202d04d85ecb7bbaf89c4a97b359aba9751110732a4c84983d1cdd2.jpg)

![](images/124d350d3b060fb357b706e943a9f7678b8fe4cba197e62eddfcbdcc9f151efb.jpg)

Figure 2: Invariance error (left); $m_{\mathcal{F}} - m_{G}$ (right).
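The sampled approximation of equation 7 can be illustrated in the simplest setting, the sorting frame for vectors under $S_n$ (our sketch, not the paper's code; `sample_frame_perm` and `approx_fa` are hypothetical names). With distinct entries the frame has a single element, so even $k = 1$ yields an exactly invariant estimate, in line with the observation about FA versus GA at $k = 1$.

```python
import numpy as np


def sample_frame_perm(x, rng):
    """Sample one element of the sorting frame F(x): a permutation that
    sorts x, chosen uniformly among valid ones when entries repeat."""
    keys = x + 1e-9 * rng.random(len(x))     # random tie-breaking
    return np.argsort(keys)


def approx_fa(phi, x, k, rng):
    """Approximate invariant FA (equation 7): average phi over k sampled
    frame elements instead of the whole frame."""
    return float(np.mean([phi(x[sample_frame_perm(x, rng)])
                          for _ in range(k)]))
```

Sampling from the (usually tiny) frame is what keeps the variance of this estimator low, in contrast to GA, which samples from all of $S_n$.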
Of course, other, more powerful frames can be chosen, e.g., using higher order WL (Morris et al., 2019) or substructure counting (Bouritsas et al., 2020), which would further improve the approximation in equation 7.

# 5.3 GRAPHS: $n$-BODY PROBLEM

In this task we learn to solve the $n$-body problem ($E(3)$ equivariant). The dataset, created in (Satorras et al., 2021; Fuchs et al., 2020), consists of a collection of $n = 5$ particle systems, where each particle is equipped with its initial position in $\mathbb{R}^3$ and velocity in $\mathbb{R}^3$, and each pair of particles (edge) with a value indicating their charge difference. The task is to predict the particles' locations after a fixed time. We follow the protocol of (Satorras et al., 2021). As baselines we use EGNN (Satorras et al., 2021), GNN (Gilmer et al., 2017), TFN (Thomas et al., 2018), SE(3)-Transformer (Fuchs et al., 2020), and Radial Field (Köhler et al., 2019). We test our FA-GNN, as described in Section 3.3. Table 3 logs the results, where the metric reported is the Mean Squared Error between predicted locations and ground truth.

| Method | MSE | Forward time (s) |
| --- | --- | --- |
| Linear | 0.0819 | 0.0001 |
| SE(3) Transformer | 0.0244 | 0.1346 |
| TFN | 0.0155 | 0.0343 |
| GNN | 0.0107 | 0.0032 |
| Radial Field | 0.0104 | 0.0039 |
| EGNN | 0.0071 | 0.0062 |
| FA-GNN | 0.0057 | 0.0041 |

Table 3: $n$-body experiment (Satorras et al., 2021).

As can be seen, FA-GNN improves over the SOTA by more than $20\%$. Note that the number of parameters used in FA-GNN and EGNN is roughly the same. More details are provided in Appendix C.3.

# 6 CONCLUSIONS

We present Frame Averaging, a generic and principled methodology for adapting existing (backbone) neural architectures to be invariant/equivariant to desired symmetries that appear in the data. We prove the method preserves the expressive power of the backbone model, and is efficient to compute in several cases of interest. We use FA to build universal GNNs, universal point cloud networks that are invariant/equivariant to Euclidean motions $E(3)$, and message passing GNNs invariant/equivariant to Euclidean motions $E(3)$. We empirically validate the effectiveness of these models on several invariant/equivariant learning tasks. We believe the instantiations presented in the paper are only the first step in exploring the full potential of the FA framework, and there are many other symmetries and scenarios that can benefit from FA, for example, extending the invariance (or equivariance) of a model from a subgroup $H$ of $G$ to the full group $G$. Further interesting open questions, not answered by this paper, are: What would be a systematic way to find small, efficient frames? How does the frame choice affect the learning process? How can useful FA architectures and modules be explored?

# ACKNOWLEDGMENTS

OP, MA and HB were supported by the European Research Council (ERC Consolidator Grant, "LiftMatch" 771136) and also by a research grant from the Carolito Stiftung (WAIC).

# REFERENCES

Ralph Abboud, Ismail Ilkan Ceylan, Martin Grohe, and Thomas Lukasiewicz. The surprising power of graph neural networks with random node initialization. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), 2021.
+Brandon Anderson, Truong-Son Hy, and Risi Kondor. Cormorant: Covariant molecular neural networks. arXiv preprint arXiv:1906.04015, 2019. +Sheng Ao, Qingyong Hu, Bo Yang, Andrew Markham, and Yulan Guo. Spinnet: Learning a general surface descriptor for 3d point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11753-11762, 2021. +Matan Atzmon, Haggai Maron, and Yaron Lipman. Point convolutional neural networks by extension operators. arXiv preprint arXiv:1803.10091, 2018. +Muhammet Balcilar, Pierre Héroux, Benoit Gauzère, Pascal Vasseur, Sébastien Adam, and Paul Honeine. Breaking the limits of message passing graph neural networks. In Proceedings of the 38th International Conference on Machine Learning (ICML), 2021. +Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018. +Simon Batzner, Tess E Smidt, Lixin Sun, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, and Boris Kozinsky. Se (3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. arXiv preprint arXiv:2101.03164, 2021. +Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. arXiv preprint arXiv:2006.09252, 2020. +Paul Breiding, Khazhgali Kozhasov, and Antonio Lerario. On the geometry of the set of symmetric matrices with repeated eigenvalues. Arnold Mathematical Journal, 4(3):423-443, 2018. +Chao Chen, Guanbin Li, Ruijia Xu, Tianshui Chen, Meng Wang, and Liang Lin. Clusternet: Deep hierarchical cluster network with rigorously rotation-invariant representation for point cloud analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
4994-5002, 2019. +Haiwei Chen, Shichen Liu, Weikai Chen, Hao Li, and Randall Hill. Equivariant point network for 3d point cloud analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14514-14523, 2021. +Taco Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional networks and the icosahedral cnn. In International Conference on Machine Learning, pp. 1321-1330. PMLR, 2019. +Taco S Cohen, Mario Geiger, Jonas Kohler, and Max Welling. Spherical cnns. In International Conference on Learning Representations, 2018. +Amir Dembo and Ofer Zeitouni. Large Deviations Techniques and Applications, volume 38 of Stochastic Modelling and Applied Probability. Springer Berlin/Heidelberg, Berlin/Heidelberg, 2010. ISBN 9783642033100. + +Congyue Deng, Or Litany, Yueqi Duan, Adrien Poulenard, Andrea Tagliasacchi, and Leonidas Guibas. Vector neurons: A general framework for so (3)-equivariant networks. arXiv preprint arXiv:2104.12229, 2021. +Haowen Deng, Tolga Birdal, and Slobodan Ilic. Ppf-foldnet: Unsupervised learning of rotation invariant 3d local descriptors. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 602-618, 2018. +Nadav Dym and Haggai Maron. On the universality of rotation equivariant point cloud networks. arXiv preprint arXiv:2010.02449, 2020. +Carlos Esteves, Christine Allen-Blanchette, Ameesh Makadia, and Kostas Daniilidis. Learning so (3) equivariant representations with spherical cnns. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 52-68, 2018. +Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data, 2020. +Fabian B Fuchs, Daniel E Worrall, Volker Fischer, and Max Welling. Se (3)-transformers: 3d roto-translation equivariant attention networks. arXiv preprint arXiv:2006.10503, 2020. +William Fulton and Joe Harris. 
Representation theory: a first course, volume 129. Springer Science & Business Media, 2013.

Martin Fürer. On the power of combinatorial and spectral invariants. Linear algebra and its applications, 432(9):2373-2380, 2010.

Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International conference on machine learning, pp. 1263-1272. PMLR, 2017.

Zan Gojcic, Caifa Zhou, Jan D Wegner, and Andreas Wieser. The perfect match: 3d point cloud matching with smoothed densities. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5545-5554, 2019.

Gene H Golub and Charles F Van Loan. Matrix computations. 3rd edition, 1996.

Weihua Hu, Muhammed Shuaibi, Abhishek Das, Siddharth Goyal, Anuroop Sriram, Jure Leskovec, Devi Parikh, and C Lawrence Zitnick. Forcenet: A graph neural network for large-scale quantum calculations. arXiv preprint arXiv:2103.01436, 2021.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.

Johannes Klicpera, Florian Becker, and Stephan Gunnemann. Gemnet: Universal directional graph neural networks for molecules. arXiv preprint arXiv:2106.08903, 2021.

Sebastian Koch, Albert Matveev, Zhongshi Jiang, Francis Williams, Alexey Artemov, Evgeny Burnaev, Marc Alexa, Denis Zorin, and Daniele Panozzo. Abc: A big cad model dataset for geometric deep learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.

Risi Kondor. N-body networks: a covariant hierarchical neural network architecture for learning atomic potentials. arXiv preprint arXiv:1803.01588, 2018.

Risi Kondor, Hy Truong Son, Horace Pan, Brandon Anderson, and Shubhendu Trivedi. Covariant compositional networks for learning graphs.
arXiv preprint arXiv:1801.02144, 2018. +Jonas Kohler, Leon Klein, and Frank Noé. Equivariant flows: sampling configurations for multi-body systems with symmetric energies, 2019. + +Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. +David A Levin and Yuval Peres. Markov chains and mixing times, volume 107. American Mathematical Soc., 2017. +Jiaxin Li, Yingcai Bi, and Gim Hee Lee. Discrete rotation equivariance for point cloud recognition. In 2019 International Conference on Robotics and Automation (ICRA), pp. 7269-7275. IEEE, 2019. +Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen. Pointcnn: Convolution on x-transformed points. Advances in neural information processing systems, 31:820-830, 2018. +Min Liu, Fupin Yao, Chiho Choi, Ayan Sinha, and Karthik Ramani. Deep learning 3d shapes using alt-az anisotropic 2-sphere convolution. In International Conference on Learning Representations, 2018. +Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3431-3440, 2015. +Andreas Loukas. What graph neural networks cannot learn: depth vs width. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=B112bp4YwS. +Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. arXiv preprint arXiv:1812.09902, 2018. +Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. arXiv preprint arXiv:1905.11136, 2019. +Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 
4602-4609, 2019. +Ryan Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Relational pooling for graph representations. In International Conference on Machine Learning, pp. 4663-4673. PMLR, 2019. +Ryan L Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs. arXiv preprint arXiv:1811.01900, 2018. +Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In Maria Florina Balcan and Kilian Q. Weinberger (eds.), Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pp. 2014-2023, New York, New York, USA, 20-22 Jun 2016. PMLR. URL https://proceedings.mlr.press/v48/niepert16.html. +Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32: 8026-8037, 2019. +Allan Pinkus. Approximation theory of the mlp model in neural networks. Acta numerica, 8:143-195, 1999. +Yury Polyanskiy and Yihong Wu. Lecture notes on information theory. Lecture Notes for ECE563 (UIUC) and, 6(2012-2016):7, 2014. +Omri Puny, Heli Ben-Hamu, and Yaron Lipman. Global attention improves graph networks generalization, 2020. + +Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 652-660, 2017a. +Charles R Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. arXiv preprint arXiv:1706.02413, 2017b. +David W Romero and Jean-Baptiste Cordonnier. Group equivariant stand-alone self-attention for vision. In ICLR, 2021. 
Akiyoshi Sannai, Makoto Kawano, and Wataru Kumagai. Equivariant and invariant reynolds networks, 2021.

Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks. arXiv preprint arXiv:2102.09844, 2021.

Nimrod Segol and Yaron Lipman. On universal equivariant set networks. arXiv preprint arXiv:1910.02421, 2019.

Muhammed Shuaibi, Adeesh Kolluru, Abhishek Das, Aditya Grover, Anuroop Sriram, Zachary Ulissi, and C Lawrence Zitnick. Rotation invariant graph neural networks using spin convolutions. arXiv preprint arXiv:2106.09575, 2021.

Shanshan Tang, Bo Li, and Haijun Yu. Chebnet: Efficient and stable constructions of deep neural networks with rectified power units using chebyshev approximations, 2019.

Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor field networks: Rotation- and translation-equivariant neural networks for 3d point clouds. arXiv preprint arXiv:1802.08219, 2018.

Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks, 2018.

Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. arXiv preprint arXiv:1801.07829, 2018.

Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco Cohen. 3d steerable cnns: Learning rotationally equivariant features in volumetric data. arXiv preprint arXiv:1807.02547, 2018.

Daniel Worrall and Gabriel Brostow. Cubenet: Equivariance to 3d rotation and translation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 567-584, 2018.

Daniel E Worrall, Stephan J Garbin, Daniyar Turmukhambetov, and Gabriel J Brostow. Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5028-5037, 2017.
+Zelin Xiao, Hongxin Lin, Renjie Li, Lishuai Geng, Hongyang Chao, and Shengyong Ding. Endowing deep 3d models with rotation invariance based on principal component analysis. In 2020 IEEE International Conference on Multimedia and Expo (ICME), pp. 1-6. IEEE, 2020. +Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018a. +Yifan Xu, Tianqi Fan, Mingye Xu, Long Zeng, and Yu Qiao. SpiderCNN: Deep learning on point sets with parameterized convolutional filters. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 87-102, 2018b. +Dmitry Yarotsky. Universal approximations of invariant maps by neural networks. Constructive Approximation, pp. 1-68, 2021. +Ruixuan Yu, Xin Wei, Federico Tombari, and Jian Sun. Deep positional and relational feature learning for rotation-invariant point cloud analysis. In European Conference on Computer Vision, pp. 217-233. Springer, 2020. + +Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan Salakhutdinov, and Alexander Smola. Deep sets. arXiv preprint arXiv:1703.06114, 2017. +Zhiyuan Zhang, Binh-Son Hua, David W Rosen, and Sai-Kit Yeung. Rotation invariant convolutions for 3d point clouds deep learning. In 2019 International Conference on 3D Vision (3DV), pp. 204-213. IEEE, 2019. + +# A PROOFS + +# A.1 PROOF OF THEOREM 1 + +Proof. First note that frame equivariance is defined to be $\mathcal{F}(\rho_1(g)X) = g\mathcal{F}(X)$ which in particular means $|\mathcal{F}(\rho_1(g)X)| = |\mathcal{F}(X)|$ . 
+ +For invariance, let $g^{\prime}\in G$ + +$$ +\begin{array}{l} \langle \phi \rangle_ {\mathcal {F}} (\rho_ {1} (g ^ {\prime}) X) = \frac {1}{| \mathcal {F} (X) |} \sum_ {g \in g ^ {\prime} \mathcal {F} (X)} \phi (\rho_ {1} (g) ^ {- 1} \rho_ {1} (g ^ {\prime}) X) = \frac {1}{| \mathcal {F} (X) |} \sum_ {g \in \mathcal {F} (X)} \phi (\rho_ {1} (g ^ {\prime} g) ^ {- 1} \rho_ {1} (g ^ {\prime}) X) \\ = \frac {1}{| \mathcal {F} (X) |} \sum_ {g \in \mathcal {F} (X)} \phi \left(\rho_ {1} (g) ^ {- 1} X\right) = \langle \phi \rangle_ {\mathcal {F}} (X) \\ \end{array} +$$ + +and for equivariance + +$$ +\begin{array}{l} \langle \Phi \rangle_ {\mathcal {F}} \left(\rho_ {1} \left(g ^ {\prime}\right) X\right) = \frac {1}{| \mathcal {F} (X) |} \sum_ {g \in g ^ {\prime} \mathcal {F} (X)} \rho_ {2} (g) \Phi \left(\rho_ {1} (g) ^ {- 1} \rho_ {1} \left(g ^ {\prime}\right) X\right) \\ = \frac {1}{| \mathcal {F} (X) |} \sum_ {g \in \mathcal {F} (X)} \rho_ {2} \left(g ^ {\prime} g\right) \Phi \left(\rho_ {1} \left(g ^ {\prime} g\right) ^ {- 1} \rho_ {1} \left(g ^ {\prime}\right) X\right) \\ = \rho_ {2} \left(g ^ {\prime}\right) \frac {1}{\left| \mathcal {F} (X) \right|} \sum_ {g \in \mathcal {F} (X)} \rho_ {2} (g) \Phi \left(\rho_ {1} (g) ^ {- 1} X\right) \\ = \rho_ {2} (g ^ {\prime}) \left\langle \Phi \right\rangle_ {\mathcal {F}} (X) \\ \end{array} +$$ + +![](images/6e711c7cef211f3e000d02f62a88093c80208f23624479dbb59628d346a82872.jpg) + +# A.2 PROOF OF THEOREM 2 + +Proof. The proof above A.1, of Theorem 1, is a special case of the following where the second symmetry is chosen to be trivial. In principle the proofs are quite similar. 
+ +First note that frame equivariance and invariance, together with $\rho_{1},\tau_{1}$ commuting mean + +$$ +\mathcal {F} \left(\gamma_ {1} (h, g) X\right) = \mathcal {F} \left(\tau_ {1} (h) \rho_ {1} (g) X\right) = \mathcal {F} \left(\rho_ {1} (g) \tau_ {1} (h) X\right) = g \mathcal {F} \left(\tau_ {1} (h) X\right) = g \mathcal {F} (X), +$$ + +which in particular implies that $|\mathcal{F}(\gamma_1(h,g)X)| = |\mathcal{F}(X)|$ . Let $(h',g') \in H \times G$ be arbitrary. Then, + +$$ +\begin{array}{l} \left\langle \phi \right\rangle_ {\mathcal {F}} \left(\gamma_ {1} \left(h ^ {\prime}, g ^ {\prime}\right) X\right) = \frac {1}{| \mathcal {F} (X) |} \sum_ {g \in g ^ {\prime} \mathcal {F} (X)} \phi \left(\rho_ {1} (g) ^ {- 1} \tau_ {1} \left(h ^ {\prime}\right) \rho_ {1} \left(g ^ {\prime}\right) X\right) \\ = \frac {1}{| \mathcal {F} (X) |} \sum_ {g \in \mathcal {F} (X)} \phi \left(\rho_ {1} \left(g ^ {\prime} g\right) ^ {- 1} \tau_ {1} \left(h ^ {\prime}\right) \rho_ {1} \left(g ^ {\prime}\right) X\right) \\ = \frac {1}{| \mathcal {F} (X) |} \sum_ {g \in \mathcal {F} (X)} \phi \left(\rho_ {1} (g) ^ {- 1} \tau_ {1} \left(h ^ {\prime}\right) X\right) \\ = \frac {1}{| \mathcal {F} (X) |} \sum_ {g \in \mathcal {F} (X)} \phi \left(\rho_ {1} (g) ^ {- 1} X\right) \\ = \left\langle \phi \right\rangle_ {\mathcal {F}} (X) \\ \end{array} +$$ + +meaning that $\langle \phi \rangle_{\mathcal{F}}$ is $H\times G$ invariant. 
Next,

$$
\begin{array}{l} \langle \Phi \rangle_{\mathcal{F}}\left(\gamma_{1}(h', g') X\right) = \frac{1}{|\mathcal{F}(X)|} \sum_{g \in g'\mathcal{F}(X)} \rho_{2}(g) \Phi\left(\rho_{1}(g)^{-1} \tau_{1}(h') \rho_{1}(g') X\right) \\ = \frac{1}{|\mathcal{F}(X)|} \sum_{g \in \mathcal{F}(X)} \rho_{2}(g' g) \Phi\left(\rho_{1}(g' g)^{-1} \tau_{1}(h') \rho_{1}(g') X\right) \\ = \frac{1}{|\mathcal{F}(X)|} \sum_{g \in \mathcal{F}(X)} \rho_{2}(g' g) \Phi\left(\rho_{1}(g)^{-1} \tau_{1}(h') X\right) \\ = \tau_{2}(h') \rho_{2}(g') \frac{1}{|\mathcal{F}(X)|} \sum_{g \in \mathcal{F}(X)} \rho_{2}(g) \Phi\left(\rho_{1}(g)^{-1} X\right) \\ = \gamma_{2}(h', g') \langle \Phi \rangle_{\mathcal{F}}(X) \\ \end{array}
$$

showing that $\langle \Phi \rangle_{\mathcal{F}}$ is $H \times G$ equivariant.

![](images/6ed3cb5f4b89f7d52f17075913278a756a4da8d9ce43486a81bd18b2efd8e2e6.jpg)

# A.3 PROOF OF THEOREM 3 AND COROLLARY 1

Proof. (Theorem 3) First we show that $G_{X}$ acts on $\mathcal{F}(X)$. For an arbitrary $h \in G_{X}$ and equivariant frame $\mathcal{F}$, we have $\mathcal{F}(X) = \mathcal{F}(\rho_1(h)X) = h\mathcal{F}(X)$, where in the first equality we used the fact that $h \in G_{X}$ and in the second the equivariance of $\mathcal{F}$. This means that if $g \in \mathcal{F}(X)$ and $h \in G_{X}$, then also $hg \in \mathcal{F}(X)$; in other words, $\mathcal{F}(X)$ is closed under the action of $G_{X}$.
Furthermore, since a group acts on itself via bijections we have that the cardinality of all orbits is the same, $|[g]| = |G_Xg| = |G_X|$ , for all $[g] \in \mathcal{F}(X) / G_{X}$ . Lastly, the equivalence relation $g \sim h \Longleftrightarrow hg^{-1} \in G_{X}$ shows that $\mathcal{F}(X)$ is a union of disjoint orbits of equal cardinality. + +Proof. (Corollary 1) Theorem 3 asserts that all orbits $[g] \in \mathcal{F}(X) / G_X$ have the same number of elements. Therefore, a random choice $g \in \mathcal{F}(X)$ will have equal probability to land in each orbit. + +# A.4 THE ROLE OF $m_{\mathcal{F}}$ IN APPROXIMATION QUALITY OF EQUATION 7 + +To better understand the role of $m_{\mathcal{F}}$ in approximating equation 6 we first note that equation 7 can be written as $\langle \langle \phi \rangle \rangle_{\mathcal{F}}(X) = \sum_{[g]\in \mathcal{F}(X) / G_X}\hat{\mu}_{[g]}\phi (\rho_1(g_i)^{-1}X)$ , where $\hat{\mu}$ is the empirical distribution over $\mathcal{F}(X) / G_X$ , assigning to each element $[g]\in \mathcal{F}(X) / G_X$ the fraction of samples landed in $[g]$ , i.e., $\hat{\mu}_{[g]} = |\{i\in [k]|g_i\in [g]\} |k^{-1}$ . We next present a lower bound on the probability of any particular $\hat{\mu}$ that provides a good approximation $\langle \langle \phi \rangle \rangle_{\mathcal{F}}(X)\approx \langle \phi \rangle_{\mathcal{F}}(X)$ . + +Theorem 5. Let $\mathcal{F}$ be an equivariant frame. 
The probability of an arbitrary $\hat{\mu} \in \left\{\hat{\mu} \mid \sup_{\phi \in \mathcal{Q}} |\langle \phi \rangle_{\mathcal{F}}(X) - \langle \langle \phi \rangle \rangle_{\mathcal{F}}(X)| \leq \epsilon \right\}$ is bounded from below as follows,

$$
P(\hat{\mu}) \geq (1 + k)^{-m_{\mathcal{F}}} \exp(-2 m_{\mathcal{F}} k \epsilon^{2}),
$$

where $\mathcal{Q} = \{\phi \in \mathcal{C}(V,\mathbb{R}) \mid |\phi(X)| \leq 1, \forall X \in V\}$ is the set of bounded, continuous functions $V \to \mathbb{R}$.

Before providing the proof, let us note that this theorem provides a lower bound for each particular "good" empirical distribution $\hat{\mu}$. The main takeaway is that for fixed $k$ and $\epsilon$, the smaller $m_{\mathcal{F}}$, the better the lower bound. The counter-intuitive behaviour of this bound w.r.t. $k$ and $\epsilon$ stems from the fact that the size of the set of "good" $\hat{\mu}$, namely $\left\{\hat{\mu} \mid \sup_{\phi \in \mathcal{Q}} |\langle \phi \rangle_{\mathcal{F}}(X) - \langle \langle \phi \rangle \rangle_{\mathcal{F}}(X)| \leq \epsilon \right\}$, is increasing with $k$ and $\epsilon$.

Proof. For brevity we denote $m = m_{\mathcal{F}}$. Our setting can be formulated as follows. We have the uniform probability distribution, denoted $\mu$, over the discrete space $[m]$ (representing the quotient set $\mathcal{F}(X) / G_X$); $\mu_j = \frac{1}{m}$, $j \in [m]$, and we have numbers $a_j = \phi(\rho_1(h_j)^{-1}X)$, where $[h_j] \in \mathcal{F}(X) / G_X$, $j \in [m]$, represent exactly one sample per orbit. In this notation $\langle \phi \rangle_{\mathcal{F}}(X) = \sum_{j=1}^{m} \frac{a_j}{m}$, while the approximation in equation 7 takes the form $\langle \langle \phi \rangle \rangle_{\mathcal{F}}(X) = \sum_{j=1}^{m} \hat{\mu}_j a_j$, where $\hat{\mu}_j = k_j / k$ and $k_j$ is the number of samples $g_i \in [h_j]$, $i \in [k]$, i.e., samples that landed in the $j$-th orbit.
Therefore,

$$
\sup_{\phi \in \mathcal{Q}} |\langle \phi \rangle_{\mathcal{F}}(X) - \langle \langle \phi \rangle \rangle_{\mathcal{F}}(X)| = \sup_{|a_j| \leq 1} \left| \sum_{j=1}^{m} a_j \left(\frac{1}{m} - \hat{\mu}_j\right) \right| = \sum_{j=1}^{m} \left| \frac{1}{m} - \hat{\mu}_j \right| = 2 \| \mu - \hat{\mu} \|_{\mathrm{TV}},
$$

where the latter is the total variation norm for discrete measures (Levin & Peres, 2017). We denote by $H(\hat{\mu}|\mu) = \sum_{j=1}^{m} \hat{\mu}_j \log\left(\frac{\hat{\mu}_j}{\mu_j}\right)$ the KL-divergence. The Pinsker inequality and its inverse for discrete positive measures are (see e.g., (Polyanskiy & Wu, 2014)):

$$
\frac{1}{2} \| \hat{\mu} - \mu \|_{\mathrm{TV}}^{2} \leq H(\hat{\mu}|\mu) \leq \frac{2}{\alpha} \| \hat{\mu} - \mu \|_{\mathrm{TV}}^{2}, \tag{8}
$$

where $\alpha = \min_{j \in [m]} \mu_j = m^{-1}$. Therefore,

$$
\Gamma_{\epsilon} = \left\{\hat{\mu} \,\middle|\, \sup_{\phi \in \mathcal{Q}} \left| \langle \phi \rangle_{\mathcal{F}}(X) - \langle \langle \phi \rangle \rangle_{\mathcal{F}}(X) \right| \leq \epsilon \right\} = \left\{\hat{\mu} \,\middle|\, 2 \| \hat{\mu} - \mu \|_{\mathrm{TV}} \leq \epsilon \right\} \subset \left\{\hat{\mu} \,\middle|\, H(\hat{\mu}|\mu) \leq \frac{\epsilon^{2} m}{2} \right\}
$$

Now, an application of Large Deviation Theory (Lemma 2.1.9 in Dembo & Zeitouni (2010)) provides that for $\hat{\mu}$ such that $H(\hat{\mu}|\mu) \leq \frac{\epsilon^{2} m}{2}$:

$$
P(\hat{\mu}) \geq \frac{1}{(1 + k)^{m}} e^{-\frac{k m \epsilon^{2}}{2}}.
$$

![](images/038d2bbae9cf1c2fdbd902801b99e676f5994b952146770064227da7543f2736.jpg)

# A.5 PROOF OF THEOREM 4

Proof. Let $\Psi \in \mathcal{C}(V,W)$ be an arbitrary $G$-equivariant function, $\mathcal{F}$ a bounded $G$-equivariant frame over a frame-finite domain $K$. Let $c > 0$ be the constant from Definition 1.
For arbitrary $X \in K$ , + +$$ +\begin{array}{l} \left\| \Psi (X) - \langle \Phi \rangle_ {\mathcal {F}} (X) \right\| _ {W} = \left\| \langle \Psi \rangle_ {\mathcal {F}} (X) - \langle \Phi \rangle_ {\mathcal {F}} (X) \right\| _ {W} \\ \leq \frac {1}{| \mathcal {F} (X) |} \sum_ {g \in \mathcal {F} (X)} \| \rho_ {2} (g) \Psi (\rho_ {1} (g) ^ {- 1} X) - \rho_ {2} (g) \Phi (\rho_ {1} (g) ^ {- 1} X) \| _ {W} \\ \leq \max _ {g \in \mathcal {F} (X)} \| \rho_ {2} (g) \| _ {\mathrm {o p}} \| \Psi - \Phi \| _ {K _ {\mathcal {F}}, W} \\ \leq c \| \Psi - \Phi \| _ {K _ {\mathcal {F}}, W} \\ \end{array} +$$ + +where in the first equality we used the fact that $\langle \Psi \rangle_{\mathcal{F}} = \Psi$ since $\Psi$ is already equivariant. + +![](images/f8980eae73450b2c93585890777357bf62914df858e02fc8689f706fdb462c10.jpg) + +# A.6 PROOF OF PROPOSITION 1 + +Proof. Let us prove $\mathcal{F}$ is equivariant (equation 3). Consider a transformation $g = (\pmb {R},\pmb {t})\in G$ , and let $(O,s)\in \mathcal{F}(X)$ , then + +$$ +\boldsymbol {C} = \boldsymbol {O} \boldsymbol {\Lambda} \boldsymbol {O} ^ {T}, \quad \boldsymbol {s} = \frac {1}{n} \boldsymbol {X} ^ {T} \boldsymbol {1}, +$$ + +where $O \Lambda O^T$ is the eigen decomposition of $C$ . Note that the group product of these transformations is + +$$ +(R, t) (O, s) = (R O, R s + t). +$$ + +We need to show $(\boldsymbol{R}\boldsymbol{O},\boldsymbol{R}\boldsymbol{s} + \boldsymbol{t})\in \mathcal{F}((\boldsymbol{R},\boldsymbol{t})\boldsymbol {X})$ . Indeed, $RO\in O(3)$ and consists of eigenvectors of $\pmb {C} = \pmb {R}\pmb{X}^{T}(\pmb {I} - \frac{1}{n}\pmb{1}\pmb{1}^{T})\pmb {X}\pmb{R}^{T}$ as can be verified with a direct computation. If $O,R\in SO(d)$ then also $RO\in SO(d)$ . 
Furthermore + +$$ +\frac {1}{n} \left(\boldsymbol {X} \boldsymbol {R} ^ {T} + \boldsymbol {1} \boldsymbol {t} ^ {T}\right) ^ {T} \boldsymbol {1} = \frac {1}{n} \left(\boldsymbol {R} \boldsymbol {X} ^ {T} + \boldsymbol {t} \boldsymbol {1} ^ {T}\right) \boldsymbol {1} = \boldsymbol {R} \boldsymbol {s} + \boldsymbol {t} +$$ + +as required. We have shown $(\boldsymbol{R}, t) \mathcal{F}(\boldsymbol{X}) \subset \mathcal{F}((\boldsymbol{R}, t) \boldsymbol{X})$ for all $\boldsymbol{X}$ and $(\boldsymbol{R}, t)$ . To show the other inclusion let $\boldsymbol{X} = (\boldsymbol{R}, t)^{-1} \boldsymbol{Y}$ and get $\mathcal{F}((\boldsymbol{R}, t)^{-1} \boldsymbol{Y}) \subset (\boldsymbol{R}, t)^{-1} \mathcal{F}(\boldsymbol{Y})$ that also holds for all $\boldsymbol{Y}$ and $(\boldsymbol{R}, t)$ . In particular $(\boldsymbol{R}, t) \mathcal{F}(\boldsymbol{X}) \supset \mathcal{F}((\boldsymbol{R}, t) \boldsymbol{X})$ . The frame $\mathcal{F}$ is bounded since for compact $K \subset \mathbb{R}^{n \times d}$ , the translations $n^{-1} \boldsymbol{X}^T \mathbf{1}$ are compact and therefore uniformly bounded for $X \in K$ , and orthogonal matrices always satisfy $\| \boldsymbol{R} \|_2 = 1$ . + +# A.7 PROOF OF PROPOSITION 2 + +Proof. First note that by definition $S(\mathbf{X})$ is equivariant in rows, namely $S(\rho_1(g)\mathbf{X}) = \rho_1(g)S(\mathbf{X})$ . Therefore if $g \in \mathcal{F}(\mathbf{X}) \subset S_n$ , then by definition of the frame $\rho_1(g)\mathbf{S}$ is sorted. Therefore, $\rho_1(g)\mathbf{S} = \rho_1(gh)\rho_1(h)^{-1}\mathbf{S}(\mathbf{X}) = \rho_1(gh)\mathbf{S}(\rho_1(h)^{-1}\mathbf{X})$ is sorted and we get that $gh \in \mathcal{F}(\rho_1(h)^{-1}\mathbf{X})$ . We proved $\mathcal{F}(\mathbf{X})h \subset \mathcal{F}(\rho_1(h)^{-1}\mathbf{X})$ for all $h \in G$ and all $\mathbf{X} \in V$ . 
Taking $\mathbf{X} = \rho_1(h)\mathbf{Y}$ for an arbitrary $\mathbf{Y} \in V$, and $h = g^{-1}$ for arbitrary $g \in G$, we get $\mathcal{F}(\rho_1(g^{-1})\mathbf{Y}) \subset \mathcal{F}(\mathbf{Y})g$, for all $g \in G$ and $\mathbf{Y} \in V$. We proved $\mathcal{F}(\mathbf{X})h = \mathcal{F}(\rho_1(h)^{-1}\mathbf{X})$, which amounts to right action equivariance, see equation 4. The frame is bounded since $\rho_1(G)$ is a finite set.

# B EMPIRICAL FRAME ANALYSIS

Repeating eigenvalues. We empirically test the likelihood of repeated eigenvalues in the covariance matrix used in the definition of the frame in Section 3.1. We use the data from the $n$-body dataset (Satorras et al., 2021). Let $\pmb{X} \in \mathbb{R}^{n \times 3}$ represent the set of particle locations (each set centered around 0 and scaled to have $\max_{x_i \in X} \left\| x_i \right\|_2 = 1$) and $\lambda_1 \leq \lambda_2 \leq \lambda_3$ be the eigenvalues, sorted in increasing order, of the covariance matrix $C = (X - \mathbf{1}t^T)^T (X - \mathbf{1}t^T)$. In order to measure the proximity of eigenvalues across the dataset we use the notion of eigenvalue spacing $s_i = \frac{\lambda_{i + 1} - \lambda_i}{\bar{s}}$, $i = 1, 2$, where $\bar{s} = \frac{\lambda_3 - \lambda_1}{2}$ is the mean spacing. Furthermore, we define $s_{min} = \min \left\{s_1, s_2\right\}$ as the minimal normalized spacing, a ratio that indicates how close the spectrum is to having repeated eigenvalues. Figure 3 presents a histogram of the minimal spacing over the training set of the $n$-body dataset (consisting of 3,000 particle sets). The minimal spacing encountered in this experiment is of order $10^{-2}$. This empirically justifies the use of finite frames for $E(d)$ equivariance.

![](images/f87aa1ae4569312bbf0df8c8e015991bcc65d3fb6a20409e91d4c1326c56a023.jpg)
Figure 3: Minimal spacing histogram over the training set of the $n$-body dataset (Satorras et al., 2021).

Frame stability.
Here we test the stability of our $O(d)$ frame defined in Section 3.1. By stability we mean the magnitude of change of a frame w.r.t. a change in the input $\mathbf{X}$. A desired attribute of the constructed frame is to be stable, that is, to exhibit small changes if the input is perturbed. We quantify this stability by comparing the distance between our frames and frames constructed from a noisy input. As in the previous experiment, we used the data from the $n$-body dataset (Satorras et al., 2021). Let $\mathbf{X} \in \mathbb{R}^{n \times 3}$ represent the set of particle locations (each set centered around 0 and scaled to have $\max_{x_i \in \mathbf{X}} \| x_i \|_2 = 1$) and $\mathbf{X}_{\sigma} = \mathbf{X} + \mathbf{Z}$, where $z_{i,j} \sim \mathcal{N}(0,\sigma)$, be the noisy input sample. We compute $\mathcal{F}(\mathbf{X})$ and $\mathcal{F}(\mathbf{X}_{\sigma})$ and choose representatives from each set, $g = (\mathbf{R}, t) \in \mathcal{F}(\mathbf{X})$, $g_{\sigma} = (\mathbf{R}_{\sigma}, t_{\sigma}) \in \mathcal{F}(\mathbf{X}_{\sigma})$. We measure the distance between the frames as a function of the representatives:

![](images/5e3955bd657485f4c8778d912c2587ca322bed8492c066e5bc445c6a9cc36ae9.jpg)
Figure 4: Distance between original and noisy frames as a function of $\sigma$ (we plot average and std). The result is reported over the training set of the $n$-body dataset (Satorras et al., 2021).

$$
E\big(\boldsymbol{R}, \boldsymbol{R}_{\sigma}\big) = \frac{1}{3} \sum_{i=1}^{3} \sqrt{1 - \langle \boldsymbol{R}_{:,i}, (\boldsymbol{R}_{\sigma})_{:,i} \rangle^{2}}
$$

Notice that in the case of a simple spectrum the distance is invariant to the selection of representatives. In Figure 4 we plot the distance between the original and noisy frames (and its standard deviation) as a function of the noise level $\sigma$. The plot validates the continuity of frames in the simple spectrum case.
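Both diagnostics above follow directly from their definitions. The sketch below computes the minimal normalized spacing $s_{min}$ and the frame distance $E(\boldsymbol{R}, \boldsymbol{R}_\sigma)$ in plain numpy, on a synthetic point cloud standing in for one $n$-body particle set (the cloud and noise level here are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic point set, preprocessed as in the text:
# centered around 0 and scaled so that max ||x_i||_2 = 1.
X = rng.standard_normal((30, 3))
X = X - X.mean(axis=0)
X = X / np.linalg.norm(X, axis=1).max()

# Covariance and its eigendecomposition (t = 0 after centering).
C = X.T @ X
lam, O = np.linalg.eigh(C)            # eigenvalues in increasing order

# Minimal normalized spacing s_min = min{s_1, s_2}, with mean spacing
# s_bar = (lambda_3 - lambda_1) / 2.
mean_spacing = (lam[2] - lam[0]) / 2
s_min = min(lam[1] - lam[0], lam[2] - lam[1]) / mean_spacing

# Frame distance E(R, R_sigma) between clean and noisy representatives.
sigma = 0.01
X_noisy = X + rng.normal(0.0, sigma, size=X.shape)
_, O_noisy = np.linalg.eigh(X_noisy.T @ X_noisy)
cos = np.einsum("ij,ij->j", O, O_noisy)   # per-column inner products
E = np.mean(np.sqrt(np.clip(1.0 - cos**2, 0.0, None)))
```

Since the squared inner product is used, the distance is insensitive to the sign ambiguity of the eigenvectors, matching the remark about invariance to the selection of representatives.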
+ +![](images/685627606fc068a03c4316af136610ff21b24156b691f36128c5c427aa2171c2.jpg) +Figure 5: FA-Local-PointNet architecture. + +# C IMPLEMENTATION DETAILS + +# C.1 POINT CLOUDS: NORMAL ESTIMATION + +Here we provide implementation details for the experiment in section 5.1. + +PointNet architecture. Our backbone PointNet is based on the object part segmentation network from Qi et al. (2017a). The network consists of layers of the form + +$$ +\operatorname {F C} \left(n, d _ {\text {i n}}, d _ {\text {o u t}}\right): X \mapsto \nu \left(\boldsymbol {X} \boldsymbol {W} + \boldsymbol {1 b} ^ {T}\right) +$$ + +$$ +\operatorname {M a x P o o l} \left(n, d _ {\text {i n}}\right): X \mapsto 1 [ \max X e _ {i} ] +$$ + +where $\mathbf{X} \in \mathbb{R}^{n \times d_{\mathrm{in}}}$ , $\mathbf{W} \in \mathbb{R}^{d_{\mathrm{in}} \times d_{\mathrm{out}}}$ , $\mathbf{b} \in \mathbb{R}^{d_{\mathrm{out}}}$ are the learnable parameters, $\mathbf{1} \in \mathbb{R}^n$ is the vector of all ones, $[\cdot]$ is the concatenation operator, $e_i$ is the standard basis in $\mathbb{R}^{d_{\mathrm{in}}}$ , and $\nu$ is the ReLU activation. 
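As an illustration, the two layer types can be sketched in a few lines of numpy (weights are randomly initialized here, for illustration only; this is not the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

def fc(X, W, b):
    # FC(n, d_in, d_out): X -> relu(X W + 1 b^T)
    return np.maximum(X @ W + b, 0.0)

def max_pool(X):
    # MaxPool(n, d_in): column-wise max over the n points, tiled back to
    # every row by the all-ones vector, so the shape (n, d_in) is kept.
    return np.ones((X.shape[0], 1)) @ X.max(axis=0, keepdims=True)

n, d_in, d_out = 512, 3, 64
X = rng.standard_normal((n, d_in))
W = 0.1 * rng.standard_normal((d_in, d_out))
b = np.zeros(d_out)

H = fc(X, W, b)       # shape (512, 64)
P = max_pool(H)       # shape (512, 64); all rows identical
```

Note that `max_pool` is invariant to reordering the points, which is what makes the pooled features usable as a global, concatenable descriptor in the architectures below.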
In this experiment, for the PointNet baseline and for $\Phi_{3,3}$ (the backbone in FA-PointNet), we used the following architecture:

$$
\operatorname{FC}(512, 3, 64) \stackrel{L_1}{\to} \operatorname{FC}(512, 64, 128) \stackrel{L_2}{\to} \operatorname{FC}(512, 128, 128) \stackrel{L_3}{\to} \operatorname{FC}(512, 128, 512) \stackrel{L_4}{\to}
$$

$$
\operatorname{FC}(512, 512, 2048) \stackrel{L_5}{\to} \operatorname{MaxPool}(512, 2048) \stackrel{L_6}{\to} \left[L_1, L_2, L_3, L_4, L_5, L_6\right] \stackrel{L_7}{\to}
$$

$$
\operatorname{FC}(512, 4928, 256) \stackrel{L_8}{\to} \operatorname{FC}(512, 256, 128) \stackrel{L_9}{\to} \operatorname{FC}(512, 128, 3).
$$

Note that the original PointNet network also contains two T-Net networks, applied to the input and to $L_{3}$ (the output of the third layer). Similarly, our baseline implementation made use of the same T-Net networks. Note that the T-Net networks were not part of our FA-PointNet backbone architecture $\Phi_{3,3}$.

The FA-Local-PointNet architecture can be seen as a composition of two parts (see Figure 5). The first part outputs equivariant features by applying the same MLP backbone $\Upsilon_{3k,3d}$ with FA on each point's k-nn patch. Then, from each point's equivariant features, the second part of the network outputs an equivariant normal estimation. Note that the second part, $\langle \Phi \rangle_{\mathcal{F}}$, is an FA-PointNet network applied to a point cloud of dimensions $\mathbb{R}^{n\times 3d}$. For FA-Local-PointNet we made the following design choices. Each point patch is constructed as its $k$ nearest neighbors, with $k = 20$, $X_{i}\in \mathbb{R}^{20\times 3}$.
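The $k$-nearest-neighbor patch construction can be sketched as follows (plain numpy; we assume Euclidean distances and that each point is included in its own patch, neither of which is stated explicitly in the text):

```python
import numpy as np

def knn_patches(X, k):
    # Pairwise squared Euclidean distances between all n points.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # Indices of each point's k nearest neighbors; the point itself comes
    # first since its distance to itself is zero.
    idx = np.argsort(d2, axis=1)[:, :k]
    return X[idx]  # shape (n, k, 3): one patch X_i per point

rng = np.random.default_rng(0)
X = rng.standard_normal((512, 3))
patches = knn_patches(X, k=20)  # one (20, 3) patch per point
```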
Then, the backbone $\Upsilon_{3*20,3*42}$ is applied to all patches, with the following PointNet layers:

$$
\operatorname{FC}(512, 3*20, 3*21) \to \operatorname{FC}(512, 3*21, 3*42) \to \operatorname{FC}(512, 3*42, 3*42).
$$

The second backbone, $\Phi_{3*42,3}$, is built from the following PointNet layers:

$$
\operatorname{FC}(512, 3*42, 128) \stackrel{L_1}{\to} \operatorname{FC}(512, 128, 256) \stackrel{L_2}{\to} \operatorname{FC}(512, 256, 256) \stackrel{L_3}{\to} \operatorname{FC}(512, 256, 512) \stackrel{L_4}{\to}
$$

$$
\operatorname{FC}(512, 512, 2048) \stackrel{L_5}{\to} \operatorname{MaxPool}(512, 2048) \stackrel{L_6}{\to} \left[L_1, L_2, L_3, L_4, L_5, L_6\right] \stackrel{L_7}{\to}
$$

$$
\operatorname{FC}(512, 5248, 256) \stackrel{L_8}{\to} \operatorname{FC}(512, 256, 128) \stackrel{L_9}{\to} \operatorname{FC}(512, 128, 3).
$$

DGCNN architecture. Our backbone DGCNN architecture, $\Phi_{3,3}$, is based on the object part segmentation network from Wang et al. (2018). It consists of $\operatorname{EdgeConv}(n, d_{\mathrm{in}}, d_{\mathrm{out}})$, $\operatorname{FC}(n, d_{\mathrm{in}}, d_{\mathrm{out}})$, and MaxPool layers.
$$
\operatorname{EdgeConv}(512, 3, 64) \stackrel{L_1}{\to} \operatorname{EdgeConv}(512, 64, 64) \stackrel{L_2}{\to} \operatorname{EdgeConv}(512, 64, 64) \stackrel{L_3}{\to}
$$

$$
\operatorname{FC}(512, 64, 1024) \stackrel{L_4}{\to} \operatorname{MaxPool}(512, 1024) \stackrel{L_5}{\to} \left[L_1, L_2, L_3, L_4, L_5\right] \stackrel{L_6}{\to}
$$

$$
\operatorname{FC}(512, 1216, 256) \stackrel{L_7}{\to} \operatorname{FC}(512, 256, 128) \stackrel{L_8}{\to} \operatorname{FC}(512, 128, 3).
$$

Note that the DGCNN architecture incorporates a T-Net network applied to the input.

Training details. We trained our networks using the ADAM (Kingma & Ba, 2014) optimizer, setting the batch size to 32 and 16 for PointNet and DGCNN, respectively. We set a fixed learning rate of 0.001. All models were trained for 250 epochs. Training was done on a single Nvidia V-100 GPU, using the PYTORCH deep learning framework (Paszke et al., 2019).

# C.2 GRAPHS: EXPRESSIVE POWER

We provide implementation details for the experiments in Section 5.2. We used two different universal backbones, MLP and GIN equipped with identifiers. The details of those architectures are presented here.

FA/GA GIN+ID architecture. The GIN+ID backbone is based on the GIN (Xu et al., 2018a) network with the addition of identifiers as node features in order to increase expressiveness. For the experiments we used a three-layer GIN with a feature dimension of size 64 and a ReLU activation function. For the added identifiers we defined the input node features as $\mathbf{X}^0 = [\mathbf{X}^0, I_n]$, where $n$ is the number of nodes in the graph, and to handle graphs of different sizes (EXP dataset) we padded the node features with zeros to fit the size of the maximal graph.
Note that we did not apply the permutation generated by the frame to the identifiers.

FA/GA MLP architecture. We used two different MLP networks for the EXP and GRAPH8c datasets, due to the different graph sizes. In the EXP dataset the maximal graph size is 64 and every node in the graph has a one-dimensional binary feature; therefore the input to the MLP network is a flattened representation of the graph (with additional padding according to the graph size), $\pmb{x} \in \mathbb{R}^{64^2 + 64}$. Our architecture consists of layers of the form

$$
\operatorname{FC}(d_{\text{in}}, d_{\text{out}}): \boldsymbol{x} \mapsto \nu(\boldsymbol{W}\boldsymbol{x} + \boldsymbol{b})
$$

where $\pmb{W} \in \mathbb{R}^{d_{\mathrm{out}} \times d_{\mathrm{in}}}$, $\pmb{b} \in \mathbb{R}^{d_{\mathrm{out}}}$ are the learnable parameters. The final output of the network is denoted by $(d_{out})$, where $d_{out}$ is the output dimension.

The MLP network structure for the EXP-classify task:

$$
\operatorname{FC}(4160, 2048) \stackrel{L_1}{\to} \operatorname{FC}(2048, 4096) \stackrel{L_2}{\to} \operatorname{FC}(4096, 2048) \stackrel{L_3}{\to}
$$

$$
\operatorname{FC}(2048, 10) \stackrel{L_4}{\to} \operatorname{FC}(10, 1) \stackrel{L_5}{\to} (1)
$$

with ReLU as the activation function. For the EXP task, which has a 10-dimensional output for each graph, we omitted the last layer. The GRAPH8c dataset is composed of all the non-isomorphic connected graphs of size 8, hence we did not use any padding of the input here. The nodes have no features and we just used a flattened version of the adjacency matrix.
The architecture for the GRAPH8c task:

$$
\operatorname{FC}(64, 128) \stackrel{L_1}{\to} \operatorname{FC}(128, 64) \stackrel{L_2}{\to} \operatorname{FC}(64, 10) \stackrel{L_3}{\to} (10)
$$

Training details. We followed the protocol from (Balcilar et al., 2021) and trained our model with batch size 100 for 200 epochs. The learning rate was set to 0.001 and did not change during training. For optimization we used the ADAM optimizer. Training was done on a single Nvidia RTX-8000 GPU, using the PYTORCH deep learning framework.

Invariance Evaluation. We quantify the permutation invariance of a model $\phi$ by comparing the outputs of randomly permuted graphs $\rho_{1}(g_{i})\mathbf{X}$, $g_{i} \in S_{n}$, $i \in [m]$, with $m = 50$. The evaluation metric used is the invariance error, defined by

$$
\frac{1}{m} \sum_{i=1}^{m} \| \phi(\rho_{1}(g_{i})\mathbf{X}) - \mathbf{v} \|_{2},
$$

where $\pmb{v} = \frac{1}{m}\sum_{i = 1}^{m}\phi(\rho_{1}(g_{i})\mathbf{X})$.

The permutation invariance error of the models (FA/GA) was measured as a function of the sample size $k$. We iterated over the entire GRAPH8c dataset and for every graph computed the invariance error for the FA-MLP, GA-MLP and regular MLP models (all with the exact same backbone network). The results presented in Figure 2 (left) are normalized by the error of the regular MLP model. The backbone MLP we used for this experiment is of the form:

$$
\operatorname{FC}(64, 128) \stackrel{L_1}{\to} \operatorname{FC}(128, 128) \stackrel{L_2}{\to} \operatorname{FC}(128, 128) \stackrel{L_3}{\to} \operatorname{FC}(128, 10) \stackrel{L_4}{\to} (10)
$$

# C.3 GRAPHS: $n$-BODY PROBLEM

This section describes the implementation details for the $n$-body experiment from Section 5.3.

FA-GNN architecture. We use the GNN architecture of Gilmer et al.
(2017) as our base GNN layer, where for each node $i \in [n]$ the update rule is

$$
\begin{array}{l} \boldsymbol{m}_{ij} = \phi_{e}\left(\boldsymbol{h}_{i}^{l}, \boldsymbol{h}_{j}^{l}, a_{ij}\right) \\ \boldsymbol{m}_{i} = \sum_{j \in \mathcal{N}(i)} \boldsymbol{m}_{ij} \\ \boldsymbol{h}_{i}^{l+1} = \phi_{h}\left(\boldsymbol{h}_{i}^{l}, \boldsymbol{m}_{i}\right) \\ \end{array}
$$

where $\mathcal{N}(i)$ are the indices of the neighbors of vertex $i$, $h_i^l$ is the embedding of node $i$ at layer $l$, and $a_{ij} \in \{-1, 1\} \times \mathbb{R}^+$ are edge attributes representing attraction or repulsion between pairs of particles, together with their distance. $h^0 \in \mathbb{R}^{n \times 6}$ represents the nodes' input features, which in this experiment are a concatenation of the nodes' initial 3D positions (rotation- and translation-equivariant) and velocities (rotation-equivariant and translation-invariant). We used a node feature dimension of size 60 to maintain a fair comparison with the baselines (Satorras et al., 2021). The functions $\phi_e$ and $\phi_h$ are implemented as two-layer MLPs with the SiLU activation function (also chosen for consistency purposes), with hidden dimensions of 121 and 120, respectively. To maintain a fair comparison with (Satorras et al., 2021), our network is composed of 4 GNN layers. $\Phi_{6,3d'}^{(1)}$ has an additional linear embedding of the features as a prefix to the GNN layers, while $\Phi_{3d',3}^{(4)}$ is equipped with a two-layer MLP (SiLU activation) as a decoder to extract the final positions.

Training details. We followed the protocol from (Satorras et al., 2021) and trained our model with batch size 100 for 10000 epochs. The learning rate was set to 0.001 and did not change during training. For optimization we used the ADAM optimizer. Training was done on a single Nvidia RTX-6000 GPU, using the PYTORCH deep learning framework.
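To make the update rule concrete, here is a minimal numpy sketch of one such layer, with $\phi_e$ and $\phi_h$ reduced to single linear maps for brevity (the paper uses two-layer MLPs with SiLU; the weights here are random, for illustration only):

```python
import numpy as np

def gnn_layer(h, edges, a, W_e, W_h):
    # h: (n, d) node embeddings; edges: list of (i, j) pairs;
    # a: (num_edges, d_a) edge attributes a_ij.
    n, d = h.shape
    m = np.zeros((n, W_e.shape[1]))
    for (i, j), a_ij in zip(edges, a):
        # m_ij = phi_e(h_i, h_j, a_ij), accumulated into m_i = sum_j m_ij
        m[i] += np.concatenate([h[i], h[j], a_ij]) @ W_e
    # h_i^{l+1} = phi_h(h_i^l, m_i)
    return np.concatenate([h, m], axis=1) @ W_h

rng = np.random.default_rng(0)
n, d, d_a = 5, 8, 2
h = rng.standard_normal((n, d))
edges = [(i, j) for i in range(n) for j in range(n) if i != j]
a = rng.standard_normal((len(edges), d_a))
W_e = 0.1 * rng.standard_normal((2 * d + d_a, d))
W_h = 0.1 * rng.standard_normal((2 * d, d))
h_next = gnn_layer(h, edges, a, W_e, W_h)  # shape (5, 8)
```

The summation over neighbors makes the layer invariant to the order in which edges are listed, which is the permutation-related property the frame construction then builds on.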
# GEODIFF: A GEOMETRIC DIFFUSION MODEL FOR MOLECULAR CONFORMATION GENERATION

Minkai Xu$^{1,2}$, Lantao Yu$^{3}$, Yang Song$^{3}$, Chence Shi$^{1,2}$, Stefano Ermon$^{3*}$, Jian Tang$^{1,4,5*}$

$^1$Mila - Québec AI Institute, Canada $^2$Université de Montréal, Canada $^3$Stanford University, USA $^4$HEC Montréal, Canada $^5$CIFAR AI Research Chair

{minkai.xu, chence.shi}@umontreal.ca {lantaoyu, yangsong, ermon}@cs.stanford.edu jian.tang@hec.ca

# ABSTRACT

Predicting molecular conformations from molecular graphs is a fundamental problem in cheminformatics and drug discovery.
Recently, significant progress has been achieved with machine learning approaches, especially with deep generative models. Inspired by the diffusion process in classical non-equilibrium thermodynamics where heated particles will diffuse from original states to a noise distribution, in this paper, we propose a novel generative model named GEODIFF for molecular conformation prediction. GEODIFF treats each atom as a particle and learns to directly reverse the diffusion process (i.e., transforming from a noise distribution to stable conformations) as a Markov chain. Modeling such a generation process is however very challenging as the likelihood of conformations should be roto-translational invariant. We theoretically show that Markov chains evolving with equivariant Markov kernels can induce an invariant distribution by design, and further propose building blocks for the Markov kernels to preserve the desirable equivariance property. The whole framework can be efficiently trained in an end-to-end fashion by optimizing a weighted variational lower bound to the (conditional) likelihood. Experiments on multiple benchmarks show that GEODIFF is superior or comparable to existing state-of-the-art approaches, especially on large molecules. $^{1}$ + +# 1 INTRODUCTION + +Graph representation learning has achieved huge success for molecule modeling in various tasks ranging from property prediction (Gilmer et al., 2017; Duvenaud et al., 2015) to molecule generation (Jin et al., 2018; Shi et al., 2020), where typically a molecule is represented as an atom-bond graph. Despite its effectiveness in various applications, a more intrinsic and informative representation for molecules is the 3D geometry, also known as conformation, where atoms are represented as their Cartesian coordinates. 
The 3D structures determine the biological and physical properties of molecules and hence play a key role in many applications such as computational drug and material design (Thomas et al., 2018; Gebauer et al., 2021; Jing et al., 2021; Batzner et al., 2021). Unfortunately, predicting stable molecular conformations remains a challenging problem. Traditional methods based on molecular dynamics (MD) or Markov chain Monte Carlo (MCMC) are very computationally expensive, especially for large molecules (Hawkins, 2017). + +Recently, significant progress has been made with machine learning approaches, especially with deep generative models. For example, Simm & Hernandez-Lobato (2020); Xu et al. (2021b) studied predicting atomic distances with variational autoencoders (VAEs) (Kingma & Welling, 2013) and flow-based models (Dinh et al., 2017) respectively. Shi et al. (2021) proposed to use denoising score matching (Song & Ermon, 2019; 2020) to estimate the gradient fields over atomic distances, through which the gradient fields over atomic coordinates can be calculated. Ganea et al. (2021) studied generating conformations by predicting both bond lengths and angles. As molecular conformations are roto-translational invariant, these approaches circumvent directly modeling atomic coordinates by leveraging intermediate geometric variables such as atomic distances, bond and torsion angles, which + +are roto-translational invariant. As a result, they are able to achieve very compelling performance. However, as all these approaches seek to indirectly model the intermediate geometric variables, they have inherent limitations in either the training or the inference process (see Sec. 2 for a detailed description). Therefore, an ideal solution would still be directly modeling the atomic coordinates while at the same time taking the roto-translational invariance property into account.
+ +In this paper, we propose such a solution called GEODIFF, a principled probabilistic framework based on denoising diffusion models (Sohl-Dickstein et al., 2015). Our approach is inspired by the diffusion process in nonequilibrium thermodynamics (De Groot & Mazur, 2013). We view atoms as particles in a thermodynamic system, which gradually diffuse from the original states to a noisy distribution in contact with a heat bath. At each time step, stochastic noise is added to the atomic positions. Our high-level idea is learning to reverse the diffusion process, which recovers the target geometric distribution from the noisy distribution. In particular, inspired by recent progress of denoising diffusion models on image generation (Ho et al., 2020; Song et al., 2020), we view the noisy geometries at different timesteps as latent variables, and formulate both the forward diffusion and reverse denoising process as Markov chains. Our goal is to learn the transition kernels such that the reverse process can recover realistic conformations from the chaotic positions sampled from a noise distribution. However, extending existing methods to geometric generation is highly non-trivial: a direct application of diffusion models on the conformation generation task leads to poor generation quality. As mentioned above, molecular conformations are roto-translational invariant, i.e., the estimated (conditional) likelihood should be unaffected by translational and rotational transformations (Köhler et al., 2020). To this end, we first theoretically show that a Markov process starting from a roto-translational invariant prior distribution and evolving with roto-translational equivariant Markov kernels can induce a roto-translational invariant density function. We further provide a practical parameterization to define a roto-translational invariant prior distribution and a Markov kernel imposing the equivariance constraints.
In addition, we derive a weighted variational lower bound of the conditional likelihood of molecular conformations, which also enjoys the roto-translational invariance and can be efficiently optimized. + +A unique strength of GEODIFF is that it directly acts on the atomic coordinates and entirely bypasses the usage of intermediate elements for both training and inference. This general formulation enjoys several crucial advantages. First, the model can be naturally trained end-to-end without involving any sophisticated techniques like bilevel programming (Xu et al., 2021b), and benefits from a smaller optimization variance. Besides, instead of solving geometries from bond lengths or angles, the one-stage sampling fashion avoids accumulating any intermediate error, and therefore leads to more accurate predicted structures. Moreover, GEODIFF enjoys a high model capacity to approximate the complex distribution of conformations. Thus, the model can better estimate the highly multi-modal distribution and generate structures with high quality and diversity. + +We conduct comprehensive experiments on multiple benchmarks, including conformation generation and property prediction tasks. Numerical results show that GEODIFF consistently outperforms existing state-of-the-art machine learning approaches, and by a large margin on the more challenging large molecules. This significantly superior performance demonstrates the model's high capacity to capture the complex distribution of molecular conformations and generate conformations that are both diverse and accurate. + +# 2 RELATED WORK + +Recently, various deep generative models have been proposed for conformation generation. Among them, CVGAE (Mansimov et al., 2019) first proposed a VAE model to directly generate 3D atomic coordinates, which fails to preserve the roto-translation equivariance property of conformations and suffers from poor performance.
To address this problem, the majority of subsequent models are based on intermediate geometric elements such as atomic distances and torsion angles. A favorable property of these elements is their roto-translational invariance (e.g., atomic distances do not change when rotating the molecule), which has been shown to be an important inductive bias for molecular geometry modeling (Köhler et al., 2020). However, such a decomposition suffers from several drawbacks for either training or sampling. For example, GRAPHDG (Simm & Hernandez-Lobato, 2020) and CGCF (Xu et al., 2021a) proposed to predict the interatomic distance matrix by VAE and Flow respectively, and then solve the geometry through the Distance Geometry (DG) technique (Liberti et al., 2014), which searches for reasonable coordinates that match the predicted + +distances. CONFVAE further improves this pipeline by designing an end-to-end framework via bilevel optimization (Xu et al., 2021b). However, all these approaches suffer from the accumulated error problem, meaning that the noise in the predicted distances will misguide the coordinate searching process and lead to inaccurate or even erroneous structures. To overcome this problem, CONFGF (Shi et al., 2021; Luo et al., 2021) proposed to learn the gradient of the log-likelihood w.r.t. coordinates. However, in practice the model is still aided by intermediate geometric elements, in that it first estimates the gradient w.r.t. interatomic distances via denoising score matching (DSM) (Song & Ermon, 2019; 2020), and then derives the gradient of coordinates using the chain rule. The problem is that, by learning the distance gradient via DSM, the model is fed with perturbed distance matrices, which may violate the triangle inequality or even contain negative values.
As a consequence, the model is actually learned over invalid distance matrices but tested with valid ones calculated from coordinates, making it suffer from a serious out-of-distribution (Hendrycks & Gimpel, 2016) problem. Most recently, another concurrent work (Ganea et al., 2021) proposed a highly systematic (rule-based) pipeline named GEOMOL, which learns to predict a minimal set of geometric quantities (i.e., lengths and angles) and then reconstruct the local and global structures of the conformation in a sophisticated procedure. Besides, there have also been efforts to use reinforcement learning for conformation search (Gogineni et al., 2020). Nevertheless, this method relies on the rigid rotor approximation and can only model the torsion angles, and thus fundamentally differs from other approaches. + +# 3 PRELIMINARIES + +# 3.1 NOTATIONS AND PROBLEM DEFINITION + +Notations. In this paper, each molecule with $n$ atoms is represented as an undirected graph $\mathcal{G} = \langle \mathcal{V},\mathcal{E}\rangle$ where $\mathcal{V} = \{v_i\}_{i = 1}^n$ is the set of vertices representing atoms and $\mathcal{E} = \{e_{ij}\mid (i,j)\in \mathcal{V}\times \mathcal{V}\}$ is the set of edges representing inter-atomic bonds. Each node $v_{i}\in \mathcal{V}$ describes the atomic attributes, e.g., the element type. Each edge $e_{ij}\in \mathcal{E}$ describes the corresponding connection between $v_{i}$ and $v_{j}$ , and is labeled with its chemical type. In addition, we also assign the unconnected edges with a virtual type. For the geometry, each atom in $\mathcal{V}$ is embedded by a coordinate vector $c\in \mathbb{R}^3$ into the 3-dimensional space, and the full set of positions (i.e., the conformation) can be represented as a matrix $\mathcal{C} = [c_1,c_2,\dots ,c_n]\in \mathbb{R}^{n\times 3}$ . + +Problem Definition.
The task of molecular conformation generation is a conditional generative problem, where we are interested in generating stable conformations for a provided graph $\mathcal{G}$ . Given multiple graphs $\mathcal{G}$ , and for each $\mathcal{G}$ its conformations $\mathcal{C}$ as i.i.d. samples from an underlying Boltzmann distribution (Noé et al., 2019), our goal is to learn a generative model $p_{\theta}(\mathcal{C}|\mathcal{G})$ , which is easy to draw samples from, to approximate the Boltzmann distribution. + +# 3.2 EQUIVARIANCE + +Equivariance is ubiquitous in machine learning for atomic systems, e.g., the vectors of atomic dipoles or forces should rotate accordingly w.r.t. the conformation coordinates (Thomas et al., 2018; Weiler et al., 2018; Fuchs et al., 2020; Miller et al., 2020; Simm et al., 2021; Batzner et al., 2021). Integrating such inductive biases into the model parameterization has proven effective for modeling 3D geometry, and is critical for generalization capacity (Köhler et al., 2020; Satorras et al., 2021a). Formally, a function $\mathcal{F}:\mathcal{X}\to \mathcal{Y}$ is equivariant w.r.t. a group $G$ if: + +$$ +\mathcal {F} \circ T _ {g} (x) = S _ {g} \circ \mathcal {F} (x), \tag {1} +$$ + +where $T_{g}$ and $S_{g}$ are transformations for an element $g \in G$ , acting on the vector spaces $\mathcal{X}$ and $\mathcal{Y}$ , respectively. In this work, we consider the SE(3) group, i.e., the group of rotations and translations in 3D space. This requires the estimated likelihood to be unaffected by translational and rotational transformations; we will elaborate on how our method satisfies this property in Sec. 4. + +# 4 GEODIFF METHOD + +In this section, we elaborate on the proposed equivariant diffusion framework. We first present a high-level description of our 3D diffusion formulation in Sec. 4.1, based on recent progress of denoising diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020).
Then we emphasize several non-trivial challenges of building diffusion models for the geometry generation scenario, and show how we technically tackle these issues. + +![](images/d52fe8f6c1098001a5dd29a3b8e5f44e8d97df9b27854e1101ff3f398be9f8e3.jpg) +Figure 1: Illustration of the diffusion and reverse processes of GEODIFF. For the diffusion process, noise from the fixed posterior distributions $q(\mathcal{C}^t |\mathcal{C}^{t - 1})$ is gradually added until the conformation is destroyed. Symmetrically, for the generative process, an initial state $\mathcal{C}^T$ is sampled from a standard Gaussian distribution, and the conformation is progressively refined via the Markov kernels $p_{\theta}(\mathcal{C}^{t - 1}|\mathcal{G},\mathcal{C}^t)$ . + +Specifically, in Sec. 4.2, we present how we parameterize $p_{\theta}(\mathcal{C}|\mathcal{G})$ so that the conditional likelihood is roto-translational invariant, and in Sec. 4.3, we introduce our modification of the training objective to make the optimization also invariant to translation and rotation. Finally, we briefly show how to draw samples from our model in Sec. 4.4. + +# 4.1 FORMULATION + +Let $\mathcal{C}^0$ denote the ground truth conformations and let $\mathcal{C}^t$ for $t = 1,\dots ,T$ be a sequence of latent variables with the same dimension, where $t$ is the index for diffusion steps. Then a diffusion probabilistic model (Sohl-Dickstein et al., 2015) can be described as a latent variable model with two processes: the forward diffusion process and the reverse generative process. Intuitively, the diffusion process progressively injects small noise into the data $\mathcal{C}^0$ , while the generative process learns to revert the diffusion process by gradually eliminating the noise to recover the ground truth. We provide a high-level schematic of the processes in Fig. 1. + +Diffusion process. Following the physical insight, we model the particles $\mathcal{C}$ as an evolving thermodynamic system.
As time goes by, the equilibrium conformation $\mathcal{C}^0$ will gradually diffuse into the next chaotic states $\mathcal{C}^t$ , and finally converge to a white noise distribution after $T$ iterations. Different from typical latent variable models, in diffusion models this forward process is defined as a fixed (rather than trainable) posterior distribution $q(\mathcal{C}^{1:T}|\mathcal{C}^0)$ . Specifically, we define it as a Markov chain according to a fixed variance schedule $\beta_{1},\ldots ,\beta_{T}$ : + +$$ +q \left(\mathcal {C} ^ {1: T} \mid \mathcal {C} ^ {0}\right) = \prod_ {t = 1} ^ {T} q \left(\mathcal {C} ^ {t} \mid \mathcal {C} ^ {t - 1}\right), \quad q \left(\mathcal {C} ^ {t} \mid \mathcal {C} ^ {t - 1}\right) = \mathcal {N} \left(\mathcal {C} ^ {t}; \sqrt {1 - \beta_ {t}} \mathcal {C} ^ {t - 1}, \beta_ {t} I\right). \tag {2} +$$ + +Note that, in this work we do not impose any specific (invariance) requirement upon the diffusion process, as long as it can efficiently draw noisy samples for training the generative process $p_{\theta}(\mathcal{C}^0)$ . + +Let $\alpha_{t} = 1 - \beta_{t}$ and $\bar{\alpha}_{t} = \prod_{s = 1}^{t}\alpha_{s}$ ; a special property of the forward process is that $q(\mathcal{C}^t |\mathcal{C}^0)$ at an arbitrary timestep $t$ can be calculated in closed form: $q(\mathcal{C}^t |\mathcal{C}^0) = \mathcal{N}(\mathcal{C}^t;\sqrt{\bar{\alpha}_t}\mathcal{C}^0,(1 - \bar{\alpha}_t)I)$ . This indicates that, with a sufficiently large $T$ , the whole forward process will convert $\mathcal{C}^0$ into a whitened isotropic Gaussian, and thus it is natural to set $p(\mathcal{C}^T)$ as a standard Gaussian distribution. + +Reverse Process. Our goal is learning to recover conformations $\mathcal{C}^0$ from the white noise $\mathcal{C}^T$ , given specified molecular graphs $\mathcal{G}$ . We consider this generative procedure as the reverse dynamics of the above diffusion process, starting from the noisy particles $\mathcal{C}^T \sim p(\mathcal{C}^T)$ .
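The closed-form marginal $q(\mathcal{C}^t \mid \mathcal{C}^0)$ above means a noisy sample at any timestep can be drawn in one shot, without simulating $t$ successive diffusion steps. A minimal sketch in plain Python (the linear schedule values and the toy 3-atom geometry are illustrative assumptions, not the paper's settings):

```python
import math
import random

def make_schedule(T=1000, beta_1=1e-4, beta_T=0.02):
    # Linear variance schedule beta_1 .. beta_T (illustrative values).
    betas = [beta_1 + (beta_T - beta_1) * t / (T - 1) for t in range(T)]
    alpha_bars, prod = [], 1.0
    for b in betas:
        prod *= 1.0 - b          # alpha_bar_t = prod_{s<=t} (1 - beta_s)
        alpha_bars.append(prod)
    return betas, alpha_bars

def q_sample(C0, t, alpha_bars, rng=random):
    # Draw C^t ~ N(sqrt(alpha_bar_t) C^0, (1 - alpha_bar_t) I) directly.
    a = alpha_bars[t]
    return [[math.sqrt(a) * v + math.sqrt(1.0 - a) * rng.gauss(0.0, 1.0)
             for v in atom] for atom in C0]

betas, alpha_bars = make_schedule()
C0 = [[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.4, 1.0, 0.0]]  # toy 3-atom geometry
C_mid = q_sample(C0, 500, alpha_bars)
# By t = T the signal is essentially destroyed: alpha_bar_T is tiny,
# so C^T is close to an isotropic Gaussian.
assert alpha_bars[-1] < 1e-3
```

Because $\bar{\alpha}_T$ decays to nearly zero, the final state is close to an isotropic Gaussian, which is why $p(\mathcal{C}^T)$ can be set to a standard normal.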
We formulate this reverse dynamics as a conditional Markov chain with learnable transitions: + +$$ +p _ {\theta} \left(\mathcal {C} ^ {0: T - 1} | \mathcal {G}, \mathcal {C} ^ {T}\right) = \prod_ {t = 1} ^ {T} p _ {\theta} \left(\mathcal {C} ^ {t - 1} | \mathcal {G}, \mathcal {C} ^ {t}\right), \quad p _ {\theta} \left(\mathcal {C} ^ {t - 1} | \mathcal {G}, \mathcal {C} ^ {t}\right) = \mathcal {N} \left(\mathcal {C} ^ {t - 1}; \mu_ {\theta} \left(\mathcal {G}, \mathcal {C} ^ {t}, t\right), \sigma_ {t} ^ {2} I\right). \tag {3} +$$ + +Herein $\mu_{\theta}$ are parameterized neural networks estimating the means, and $\sigma_t$ can be any user-defined variance. The initial distribution $p(\mathcal{C}^T)$ is set as a standard Gaussian. Given a graph $\mathcal{G}$ , its 3D structure is generated by first drawing chaotic particles $\mathcal{C}^T$ from $p(\mathcal{C}^T)$ , and then iteratively refining them through the reverse Markov kernels $p_{\theta}(\mathcal{C}^{t - 1}|\mathcal{G},\mathcal{C}^t)$ . + +Having formulated the reverse dynamics, the marginal likelihood can be calculated by $p_{\theta}(\mathcal{C}^0|\mathcal{G}) = \int p(\mathcal{C}^T)p_{\theta}(\mathcal{C}^{0:T - 1}|\mathcal{G},\mathcal{C}^T)\mathrm{d}\mathcal{C}^{1:T}$ . Herein a non-trivial problem is that the likelihood should be invariant w.r.t. translation and rotation, which has proved to be a critical inductive bias for 3D object generation (Köhler et al., 2020; Satorras et al., 2021a). In the following subsections, we will elaborate on how we parameterize the Markov kernels $p_{\theta}(\mathcal{C}^{t - 1}|\mathcal{G},\mathcal{C}^t)$ to achieve this desired property, and also how to maximize this likelihood by taking the invariance into account. + +# 4.2 EQUIVARIANT REVERSE GENERATIVE PROCESS + +Instead of directly leveraging existing methods, we consider building a density $p_{\theta}(\mathcal{C}^0)$ that is invariant to rotation and translation transformations.
Intuitively, this requires the likelihood to be unaffected by translations and rotations. Formally, let $T_g$ be some roto-translational transformations of a group element $g \in \mathrm{SE}(3)$ ; then we have the following statement: + +Proposition 1. Let $p(x_{T})$ be an SE(3)-invariant density function, i.e., $p(x_{T}) = p(T_{g}(x_{T}))$ . If the Markov transitions $p(x_{t - 1}|x_t)$ are SE(3)-equivariant, i.e., $p(x_{t - 1}|x_t) = p(T_g(x_{t - 1})|T_g(x_t))$ , then the density $p_{\theta}(x_0) = \int p(x_T)p_{\theta}(x_{0:T - 1}|x_T)\mathrm{d}\pmb{x}_{1:T}$ is also SE(3)-invariant. + +This proposition indicates that a dynamics starting from an invariant standard density and evolving along equivariant Gaussian Markov kernels results in an invariant density. Now we provide a practical implementation of GEODIFF based on the recent denoising diffusion framework (Ho et al., 2020). + +Invariant Initial Density $p(\mathcal{C}^T)$ . We first introduce the invariant distribution $p(\mathcal{C}^T)$ , which will also be employed in the equivariant Markov chain. We borrow the idea from Köhler et al. (2020) to consider systems with zero center of mass (CoM), termed CoM-free systems. We define $p(\mathcal{C}^T)$ as a "CoM-free standard density" $\hat{\rho}(\mathcal{C})$ , built upon an isotropic normal density $\rho(\mathcal{C})$ : to evaluate the likelihood $\hat{\rho}(\mathcal{C})$ we first translate $\mathcal{C}$ to zero CoM and then calculate $\rho(\mathcal{C})$ , and to sample from $\hat{\rho}(\mathcal{C})$ we first sample from $\rho(\mathcal{C})$ and then move the CoM to zero. + +We provide a formal theoretical analysis of $\hat{\rho} (\mathcal{C})$ in Appendix A. Intuitively, the isotropic Gaussian is manifestly invariant to rotations around the zero CoM, and by considering CoM-free systems, moving the particles to zero CoM always ensures the translational invariance.
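This CoM-free construction can be sketched in a few lines of plain Python; the per-coordinate treatment of the density is a simplification for illustration (the formal analysis, including constant factors on the CoM-free subspace, is deferred to Appendix A in the text):

```python
import math
import random

def sample_com_free(n, rng=random):
    # Sample from the isotropic normal rho(C), then move the CoM to zero.
    C = [[rng.gauss(0.0, 1.0) for _ in range(3)] for _ in range(n)]
    com = [sum(p[k] for p in C) / n for k in range(3)]
    return [[p[k] - com[k] for k in range(3)] for p in C]

def log_rho_hat(C):
    # Evaluate log rho_hat(C): first translate C to zero CoM, then score it
    # under the isotropic standard normal. Translating first makes the value
    # unchanged under any rigid translation of C.
    n = len(C)
    com = [sum(p[k] for p in C) / n for k in range(3)]
    return sum(-0.5 * (p[k] - com[k]) ** 2 - 0.5 * math.log(2.0 * math.pi)
               for p in C for k in range(3))

C = sample_com_free(5)
shifted = [[p[k] + 7.0 for k in range(3)] for p in C]  # arbitrary translation
assert abs(log_rho_hat(C) - log_rho_hat(shifted)) < 1e-8
```

Rotational invariance follows as well: a rotation about the zero CoM preserves all squared norms, so the same log-density value is obtained.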
Consequently, $\hat{\rho} (\mathcal{C})$ is constructed as a roto-translational invariant density. + +Equivariant Markov Kernels $p(\mathcal{C}^{t - 1}|\mathcal{G},\mathcal{C}^t)$ . Similar to the prior density, we also constrain all intermediate structures $\mathcal{C}^t$ to be CoM-free systems. Specifically, given mean $\mu_{\theta}(\mathcal{G},\mathcal{C}^t,t)$ and variance $\sigma_t$ , the likelihood of $\mathcal{C}^{t - 1}$ is calculated by $\hat{\rho} (\frac{\mathcal{C}^{t - 1} - \mu_{\theta}(\mathcal{G},\mathcal{C}^t,t)}{\sigma_t})$ . The CoM-free Gaussian ensures translation invariance in the Markov kernels. Consequently, to achieve the equivariance property defined in Proposition 1, we focus on rotation equivariance. + +In general, the key requirement is then to ensure that the means $\mu_{\theta}(\mathcal{G},\mathcal{C}^t,t)$ are roto-translation equivariant w.r.t. $\mathcal{C}^t$ . Following Ho et al. (2020), we consider the following parameterization of $\mu_{\theta}$ : + +$$ +\mu_ {\theta} \left(\mathcal {G}, \mathcal {C} ^ {t}, t\right) = \frac {1}{\sqrt {\alpha_ {t}}} \left(\mathcal {C} ^ {t} - \frac {\beta_ {t}}{\sqrt {1 - \bar {\alpha} _ {t}}} \epsilon_ {\theta} \left(\mathcal {G}, \mathcal {C} ^ {t}, t\right)\right), \tag {4} +$$ + +where $\epsilon_{\theta}$ are neural networks with trainable parameters $\theta$ . Intuitively, the model $\epsilon_{\theta}$ learns to predict the noise necessary to decorrupt the conformations. This is analogous to physical force fields (Schütt et al., 2017; Zhang et al., 2018; Hu et al., 2021; Shuaibi et al., 2021), which also gradually push particles towards convergence around the equilibrium states. + +Now the problem is transformed into constructing $\epsilon_{\theta}$ to be roto-translational equivariant. We draw inspiration from recent equivariant networks (Thomas et al., 2018; Satorras et al., 2021b) to design an equivariant convolutional layer, named graph field network (GFN).
In the $l$ -th layer, GFN takes node embeddings $\mathbf{h}^l \in \mathbb{R}^{n \times b}$ ( $b$ denotes the feature dimension) and corresponding coordinate embeddings $\mathbf{x}^l \in \mathbb{R}^{n \times 3}$ as inputs, and outputs $\mathbf{h}^{l+1}$ and $\mathbf{x}^{l+1}$ as follows: + +$$ +\mathbf {m} _ {i j} = \Phi_ {m} \left(\mathbf {h} _ {i} ^ {l}, \mathbf {h} _ {j} ^ {l}, \| \mathbf {x} _ {i} ^ {l} - \mathbf {x} _ {j} ^ {l} \| ^ {2}, e _ {i j}; \theta_ {m}\right) \tag {5} +$$ + +$$ +\mathbf {h} _ {i} ^ {l + 1} = \Phi_ {h} \left(\mathbf {h} _ {i} ^ {l}, \sum_ {j \in \mathcal {N} (i)} \mathbf {m} _ {i j}; \theta_ {h}\right) \tag {6} +$$ + +$$ +\mathbf {x} _ {i} ^ {l + 1} = \sum_ {j \in \mathcal {N} (i)} \frac {1}{d _ {i j}} \left(\mathbf {c} _ {i} - \mathbf {c} _ {j}\right) \Phi_ {x} \left(\mathbf {m} _ {i j}; \theta_ {x}\right) \tag {7} +$$ + +where $\Phi$ are feed-forward networks and $d_{ij}$ denotes interatomic distances. $\mathcal{N}(i)$ denotes the neighborhood of the $i^{th}$ node, including both connected atoms and other atoms within a radius threshold $\tau$ , which enables the model to explicitly capture long-range interactions and support molecular graphs with disconnected components. Initial embeddings $\mathbf{h}^0$ are combinations of atom and timestep embeddings, and $\mathbf{x}^0$ are atomic coordinates. The main difference between the proposed GFN and other GNNs lies in equation 7, where $\mathbf{x}$ is updated as a combination of radial directions weighted by $\Phi_x:\mathbb{R}^b\to \mathbb{R}$ . Such a vector field $\mathbf{x}^L$ enjoys the roto-translation equivariance property. Formally, we have: + +Proposition 2. Parameterize $\epsilon_{\theta}(\mathcal{G},\mathcal{C},t)$ as a composition of $L$ GFN layers, and take $\mathbf{x}^L$ after $L$ updates as the output. Then the noise vector field $\epsilon_{\theta}$ is SE(3)-equivariant w.r.t. the 3D system $\mathcal{C}$ .
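A toy numerical check of the coordinate update in equation 7 can make this concrete. The sketch below uses scalar node features, folds the message computation of equation 5 into a single invariant stand-in function for the learned MLPs $\Phi$ , and picks arbitrary coordinates; all of these are illustrative assumptions. Rotating the input and then updating gives the same result as updating and then rotating:

```python
import math

def gfn_coord_update(x, h, neighbors, phi_x):
    # Equation 7-style update: each coordinate moves along the radial
    # directions (x_i - x_j), weighted by an invariant scalar phi_x(...) / d_ij.
    out = []
    for i, xi in enumerate(x):
        upd = [0.0, 0.0, 0.0]
        for j in neighbors[i]:
            diff = [xi[k] - x[j][k] for k in range(3)]
            d = math.sqrt(sum(v * v for v in diff))
            w = phi_x(h[i], h[j], d * d) / d
            upd = [u + w * v for u, v in zip(upd, diff)]
        out.append(upd)
    return out

def rotate_z(C, a):
    # Rotate every point about the z-axis by angle a.
    c, s = math.cos(a), math.sin(a)
    return [[c * p[0] - s * p[1], s * p[0] + c * p[1], p[2]] for p in C]

phi = lambda hi, hj, d2: hi + hj + d2  # invariant stand-in for the MLP Phi_x
x = [[0.0, 0.0, 0.0], [1.0, 0.2, -0.3], [0.5, 1.0, 0.7]]
h = [0.1, 0.4, 0.9]
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

# Rotation equivariance: update(R x) == R update(x).
lhs = gfn_coord_update(rotate_z(x, 0.8), h, nbrs, phi)
rhs = rotate_z(gfn_coord_update(x, h, nbrs, phi), 0.8)
assert all(abs(u - v) < 1e-9 for p, q in zip(lhs, rhs) for u, v in zip(p, q))
```

The weight is built only from invariants (features and squared distances), so the update inherits the rotation of the relative-direction vectors, which is exactly the argument made inductively for the full $L$-layer composition.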
+ +Intuitively, given $\mathbf{h}^l$ already invariant and $\mathbf{x}^l$ equivariant, the message embedding $\mathbf{m}$ will also be invariant since it only depends on invariant features. Since $\mathbf{x}$ is updated with the relative differences $\mathbf{c}_i - \mathbf{c}_j$ weighted by invariant features, it will be translation-invariant and rotation-equivariant. Then, inductively, composing $\epsilon_{\theta}$ of $L$ GFN layers ensures equivariance w.r.t. $\mathcal{C}^t$ . We provide the formal proof of the equivariance properties in Appendix A. + +# 4.3 IMPROVED TRAINING OBJECTIVE + +Having formulated the generative process and the model parameterization, we now consider the practical training objective for the reverse dynamics. Since directly optimizing the exact log-likelihood is intractable, we instead maximize the usual variational lower bound (ELBO): + +$$ +\begin{array}{l} \mathbb {E} \left[ \log p _ {\theta} (\mathcal {C} ^ {0} | \mathcal {G}) \right] = \mathbb {E} \left[ \log \mathbb {E} _ {q (\mathcal {C} ^ {1: T} | \mathcal {C} ^ {0})} \frac {p _ {\theta} (\mathcal {C} ^ {0 : T} | \mathcal {G})}{q (\mathcal {C} ^ {1 : T} | \mathcal {C} ^ {0})} \right] \\ \geq - \mathbb {E} _ {q} \left[ \sum_ {t = 1} ^ {T} D _ {\mathrm {K L}} \left(q \left(\mathcal {C} ^ {t - 1} \mid \mathcal {C} ^ {t}, \mathcal {C} ^ {0}\right) \| p _ {\theta} \left(\mathcal {C} ^ {t - 1} \mid \mathcal {C} ^ {t}, \mathcal {G}\right)\right) \right] := - \mathcal {L} _ {\mathrm {E L B O}} \tag {8} \\ \end{array} +$$ + +where $q(\mathcal{C}^{t - 1}|\mathcal{C}^t,\mathcal{C}^0)$ is analytically tractable as $\mathcal{N}(\frac{\sqrt{\bar{\alpha}_{t - 1}}\beta_t}{1 - \bar{\alpha}_t}\mathcal{C}^0 +\frac{\sqrt{\alpha_t}(1 - \bar{\alpha}_{t - 1})}{1 - \bar{\alpha}_t}\mathcal{C}^t,\frac{1 - \bar{\alpha}_{t - 1}}{1 - \bar{\alpha}_t}\beta_t I)$ . Most recently, Ho et al.
(2020) showed that under the parameterization in equation 4, the ELBO of the diffusion model can be further simplified by calculating the KL divergences between Gaussians as weighted $\mathcal{L}_2$ distances between the means $\epsilon_{\theta}$ and $\epsilon$ . Formally, we have: + +Proposition 3. (Ho et al., 2020) Under the parameterization in equation 4, we have: + +$$ +\mathcal {L} _ {\mathrm {E L B O}} = \sum_ {t = 1} ^ {T} \gamma_ {t} \mathbb {E} _ {\{\mathcal {C} ^ {0}, \mathcal {G}\} \sim q \left(\mathcal {C} ^ {0}, \mathcal {G}\right), \epsilon \sim \mathcal {N} (0, I)} \left[ \| \epsilon - \epsilon_ {\theta} (\mathcal {G}, \mathcal {C} ^ {t}, t) \| _ {2} ^ {2} \right] \tag {9} +$$ + +where $\mathcal{C}^t = \sqrt{\bar{\alpha}_t}\mathcal{C}^0 +\sqrt{1 - \bar{\alpha}_t}\epsilon$ . The weights $\gamma_{t} = \frac{\beta_{t}}{2\alpha_{t}(1 - \bar{\alpha}_{t - 1})}$ for $t > 1$ , and $\gamma_{1} = \frac{1}{2\alpha_{1}}$ . + +The intuition of this objective is to independently sample chaotic conformations at different timesteps from $q(\mathcal{C}^t |\mathcal{C}^0)$ , and use $\epsilon_{\theta}$ to model the noise vector $\epsilon$ . To yield better empirical performance, Ho et al. (2020) suggest setting all weights $\gamma_{t}$ to 1, which is in line with the objectives of recent noise conditional score networks (Song & Ermon, 2019; 2020). + +As $\epsilon_{\theta}$ is designed to be equivariant, it is natural to require its supervision signal $\epsilon$ to be equivariant with $\mathcal{C}^t$ . Note that once this is achieved, the ELBO will also become invariant. However, the $\epsilon$ in the forward diffusion process is not subject to such an equivariance constraint, violating the above properties. Here we propose two approaches to obtain the modified noise vector $\hat{\epsilon}$ , which, after replacing $\epsilon$ in the $\mathcal{L}_2$ distance calculation in equation 9, achieves the desired equivariance: + +Alignment approach.
Considering the fact that $\epsilon$ can be calculated by $\frac{\mathcal{C}^t - \sqrt{\bar{\alpha}_t}\mathcal{C}^0}{\sqrt{1 - \bar{\alpha}_t}}$ , we can first rotate and translate $\mathcal{C}^0$ to $\hat{\mathcal{C}}^0$ by aligning it w.r.t. $\mathcal{C}^t$ , and then compute $\hat{\epsilon}$ as $\frac{\mathcal{C}^t - \sqrt{\bar{\alpha}_t}\hat{\mathcal{C}}^0}{\sqrt{1 - \bar{\alpha}_t}}$ . Since the aligned conformation $\hat{\mathcal{C}}^0$ is equivariant with $\mathcal{C}^t$ , the processed $\hat{\epsilon}$ will also enjoy the equivariance. Specifically, the alignment is implemented by first translating $\mathcal{C}^0$ to the same CoM as $\mathcal{C}^t$ and then solving for the optimal rotation matrix with the Kabsch alignment algorithm (Kabsch, 1976). + +Chain-rule approach. Another meaningful observation is that by reparameterizing the Gaussian distribution $q(\mathcal{C}^t | \mathcal{C}^0)$ as $\mathcal{C}^t = \sqrt{\bar{\alpha}_t} \mathcal{C}^0 + \sqrt{1 - \bar{\alpha}_t} \epsilon$ , $\epsilon$ can be viewed as a weighted score function $\sqrt{1 - \bar{\alpha}_t} \nabla_{\mathcal{C}^t} q(\mathcal{C}^t | \mathcal{C}^0)$ . Shi et al. (2021) recently showed that in general this score function $\nabla_{\mathcal{C}^t} q(\mathcal{C}^t | \cdot)$ can be designed to be equivariant by decomposing it into $\partial_{\mathcal{C}^t} \mathbf{d}^t \nabla_{\mathbf{d}^t} q(\mathcal{C}^t | \cdot)$ with the chain rule, where $\mathbf{d}^t$ can be any invariant features of the structures $\mathcal{C}^t$ , such as the inter-atomic distances. We refer readers to Shi et al. (2021) for more details. The insight is that, as the gradient of invariant variables w.r.t. equivariant variables, the partial derivative $\partial_{\mathcal{C}^t} \mathbf{d}^t$ will always be equivariant with $\mathcal{C}^t$ .
In this work, under the common assumption that $\mathbf{d}$ also follows a Gaussian distribution (Kingma & Welling, 2013), our practical implementation is to first approximately calculate $\nabla_{\mathbf{d}^t} q(\mathcal{C}^t | \mathcal{C}^0)$ as $\frac{\mathbf{d}^t - \sqrt{\bar{\alpha}_t} \mathbf{d}^0}{1 - \bar{\alpha}_t}$ , and then compute the modified noise vector $\hat{\epsilon}$ as $\sqrt{1 - \bar{\alpha}_t} \partial_{\mathcal{C}^t} \mathbf{d}^t (\frac{\mathbf{d}^t - \sqrt{\bar{\alpha}_t} \mathbf{d}^0}{1 - \bar{\alpha}_t}) = \frac{\partial_{\mathcal{C}^t} \mathbf{d}^t \cdot (\mathbf{d}^t - \sqrt{\bar{\alpha}_t} \mathbf{d}^0)}{\sqrt{1 - \bar{\alpha}_t}}$ . + +# 4.4 SAMPLING + +With a learned reverse dynamics $\epsilon_{\theta}(\mathcal{G},\mathcal{C}^t,t)$ , the transition means $\mu_{\theta}(\mathcal{G},\mathcal{C}^t,t)$ can be calculated by equation 4. Thus, given a graph $\mathcal{G}$ , its geometry $\mathcal{C}^0$ is generated by first sampling chaotic particles $\mathcal{C}^T\sim p(\mathcal{C}^T)$ , and then progressively sampling $\mathcal{C}^{t - 1}\sim p_{\theta}(\mathcal{C}^{t - 1}|\mathcal{G},\mathcal{C}^t)$ for $t = T, T - 1, \dots, 1$ . This process is Markovian, which gradually shifts the previous noisy positions towards equilibrium states. + +# Algorithm 1 Sampling Algorithm of GEODIFF. + +Input: the molecular graph $\mathcal{G}$ , the learned reverse model $\epsilon_{\theta}$ . + +Output: the molecular conformation $\mathcal{C}$ . + +1: Sample $\mathcal{C}^T\sim p(\mathcal{C}^T) = \mathcal{N}(0,I)$
+2: for $s = T, T - 1, \dots, 1$ do
+3: Shift $\mathcal{C}^s$ to zero CoM
+4: Compute $\mu_{\theta}(\mathcal{C}^s,\mathcal{G},s)$ from $\epsilon_{\theta}(\mathcal{C}^{s},\mathcal{G},s)$ using equation 4
+5: Sample $\mathcal{C}^{s - 1}\sim \mathcal{N}(\mathcal{C}^{s - 1};\mu_{\theta}(\mathcal{C}^s,\mathcal{G},s),\sigma_s^2 I)$
+6: end for
+7: return $\mathcal{C}^0$ as $\mathcal{C}$
We provide the pseudo code of the whole sampling process in Algorithm 1.

# 5 EXPERIMENT

In this section, we empirically evaluate GEODIFF on the task of equilibrium conformation generation for both small and drug-like molecules. Following existing work (Shi et al., 2021; Ganea et al., 2021), we test the proposed method as well as the competitive baselines on two standard benchmarks: Conformation Generation (Sec. 5.2) and Property Prediction (Sec. 5.3). We first present the general experiment setup, and then describe task-specific evaluation protocols and discuss the results in each section. The implementation details are provided in Appendix C.

# 5.1 EXPERIMENT SETUP

Datasets. Following prior works (Xu et al., 2021a;b), we also use the recent GEOM-QM9 (Ramakrishnan et al., 2014) and GEOM-Drugs (Axelrod & Gomez-Bombarelli, 2020) datasets. The former contains small molecules, while the latter contains medium-sized organic compounds. We borrow the data split produced by Shi et al. (2021). For both datasets, the training split consists of 40,000 molecules with 5 conformations each, resulting in 200,000 conformations in total. The validation split shares the same size as the training split. The test split contains 200 distinct molecules, with 22,408 conformations for QM9 and 14,324 for Drugs.

Baselines. We compare GEODIFF with 6 recent or established state-of-the-art baselines. For the ML approaches, we test the following models with the highest reported performance: CVGAE (Mansimov et al., 2019), GRAPHDG (Simm & Hernandez-Lobato, 2020), CGCF (Xu et al., 2021a), CONFVAE (Xu et al., 2021b) and CONFGF (Shi et al., 2021). We also test the classic RDKIT (Riniker & Landrum, 2015) method, which is arguably the most popular open-source software for conformation generation. We refer readers to Sec. 2 for a detailed discussion of these models.

# 5.2 CONFORMATION GENERATION

Evaluation metrics.
The task measures both the quality and diversity of the conformations generated by different models. We follow Ganea et al. (2021) to evaluate 4 metrics built upon the root-mean-square

Table 1: Results on the GEOM-Drugs dataset, without FF optimization.
| Models | COV-R (%) ↑ Mean | COV-R (%) ↑ Median | MAT-R (Å) ↓ Mean | MAT-R (Å) ↓ Median | COV-P (%) ↑ Mean | COV-P (%) ↑ Median | MAT-P (Å) ↓ Mean | MAT-P (Å) ↓ Median |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CVGAE | 0.00 | 0.00 | 3.0702 | 2.9937 | - | - | - | - |
| GRAPHDG | 8.27 | 0.00 | 1.9722 | 1.9845 | 2.08 | 0.00 | 2.4340 | 2.4100 |
| CGCF | 53.96 | 57.06 | 1.2487 | 1.2247 | 21.68 | 13.72 | 1.8571 | 1.8066 |
| CONFVAE | 55.20 | 59.43 | 1.2380 | 1.1417 | 22.96 | 14.05 | 1.8287 | 1.8159 |
| GEOMOL | 67.16 | 71.71 | 1.0875 | 1.0586 | - | - | - | - |
| CONFGF | 62.15 | 70.93 | 1.1629 | 1.1596 | 23.42 | 15.52 | 1.7219 | 1.6863 |
| GEODIFF-A | 88.36 | 96.09 | 0.8704 | 0.8628 | 60.14 | 61.25 | 1.1864 | 1.1391 |
| GEODIFF-C | 89.13 | 97.88 | 0.8629 | 0.8529 | 61.47 | 64.55 | 1.1712 | 1.1232 |

* The COV-R and MAT-R results of CVGAE, GRAPHDG, CGCF, and CONFGF are borrowed from Shi et al. (2021). The results of GEOMOL are borrowed from a most recent study (Zhu et al., 2022). Other results are obtained from our own experiments. The results of all models on the GEOM-QM9 dataset (summarized in Tab. 5) are collected in the same way.

deviation (RMSD), which is defined as the normalized Frobenius norm of two atomic coordinate matrices after alignment by the Kabsch algorithm (Kabsch, 1976). Formally, let $S_{g}$ and $S_{r}$ denote the sets of generated and reference conformers respectively; then the Coverage and Matching metrics (Xu et al., 2021a), following the conventional Recall measurement, can be defined as:

$$
\mathrm{COV}\text{-}\mathrm{R}\left(S_g, S_r\right) = \frac{1}{\left|S_r\right|}\left|\left\{\mathcal{C} \in S_r \mid \mathrm{RMSD}(\mathcal{C}, \hat{\mathcal{C}}) \leq \delta,\ \hat{\mathcal{C}} \in S_g\right\}\right|, \tag{10}
$$

$$
\mathrm{MAT}\text{-}\mathrm{R}\left(S_g, S_r\right) = \frac{1}{\left|S_r\right|} \sum_{\mathcal{C} \in S_r} \min_{\hat{\mathcal{C}} \in S_g} \mathrm{RMSD}(\mathcal{C}, \hat{\mathcal{C}}), \tag{11}
$$

where $\delta$ is a pre-defined threshold. The other two metrics, COV-P and MAT-P, inspired by Precision, can be defined similarly but with the generated and reference sets exchanged. In practice, $S_{g}$ is set to twice the size of $S_{r}$ for each molecule. Intuitively, the COV scores measure the percentage of structures in one set covered by another set, where covering means the RMSD between two conformations is within a certain threshold $\delta$ . By contrast, the MAT scores measure the average RMSD of conformers in one set to their closest neighbors in the other set. In general, a higher COV rate or a lower MAT score suggests that more realistic conformations are generated.
Besides, the Precision metrics depend more on quality, while the Recall metrics concentrate more on diversity; either can be more appealing depending on the specific scenario. Following previous works (Xu et al., 2021a; Ganea et al., 2021), $\delta$ is set to 0.5 Å and 1.25 Å for the QM9 and Drugs datasets respectively.

Results & discussion. The results are summarized in Tab. 1 and Tab. 5 (in Appendix D). As noted in Sec. 4.3, GEODIFF can be trained with two types of modified ELBO, named the alignment and chain-rule approaches. We denote models learned with these two objectives as GEODIFF-A and GEODIFF-C respectively. As shown in the tables, GEODIFF consistently outperforms the state-of-the-art ML models on all datasets and metrics, especially by a significant margin for the more challenging large molecules (Drugs dataset). The results demonstrate the superior capacity of GEODIFF to model the multi-modal distribution and to generate both accurate and diverse conformations. We also notice that in general GEODIFF-C performs slightly better than GEODIFF-A, which suggests that the chain-rule approach leads to a better optimization procedure. We thus take GEODIFF-C as the representative in the following comparisons. We visualize samples generated by different models in Fig. 2 to provide a qualitative comparison, where GEODIFF is shown to capture both local and global structures better.

On the more challenging Drugs dataset, we further test RDKIT. As shown in Tab. 2, our observation is in line with previous studies (Shi et al., 2021): the state-of-the-art ML models (shown in Tab. 1) perform better on COV-R and MAT-R. However, on the new Precision-based metrics, we find that the ML models are still not comparable to RDKIT. This indicates that ML models tend to explore more possible representatives, while RDKIT concentrates on a few most common ones, prioritizing quality over diversity.
Previous works (Mansimov et al., 2019; Xu et al., 2021b) suggest that this is because RDKIT involves an additional empirical force field (FF) (Halgren, 1996) to optimize the structure, and we follow them in also combining GEODIFF with FF to yield a fairer comparison. Results in
Figure 2: Examples of generated structures from the Drugs dataset (panels: Graph, Reference, GeoDiff, ConfGF, GraphDG). For every model, we show the conformation best-aligned with the ground truth. More examples are provided in Appendix E.

Table 2: Results on the GEOM-Drugs dataset, with FF optimization.
| Models | COV-R (%) ↑ Mean | COV-R (%) ↑ Median | MAT-R (Å) ↓ Mean | MAT-R (Å) ↓ Median | COV-P (%) ↑ Mean | COV-P (%) ↑ Median | MAT-P (Å) ↓ Mean | MAT-P (Å) ↓ Median |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RDKIT | 60.91 | 65.70 | 1.2026 | 1.1252 | 72.22 | 88.72 | 1.0976 | 0.9539 |
| GEODIFF + FF | 92.27 | 100.00 | 0.7618 | 0.7340 | 84.51 | 95.86 | 0.9834 | 0.9221 |

Tab. 2 demonstrate that GEODIFF + FF keeps the superior diversity (Recall metrics) while also enjoying significantly improved accuracy (Precision metrics).

# 5.3 PROPERTY PREDICTION

Evaluation metrics. This task estimates the molecular ensemble properties (Axelrod & Gomez-Bombarelli, 2020) over a set of generated conformations, which provides a direct assessment of the quality of generated samples. Specifically, we follow Shi et al. (2021) to extract a split from GEOM-QM9 covering 30 molecules, and generate 50 samples for each. Then we use

Table 3: MAE of predicted ensemble properties in eV.
| Method | $\overline{E}$ | $E_{\mathrm{min}}$ | $\overline{\Delta\epsilon}$ | $\Delta\epsilon_{\mathrm{min}}$ | $\Delta\epsilon_{\mathrm{max}}$ |
| --- | --- | --- | --- | --- | --- |
| RDKIT | 0.9233 | 0.6585 | 0.3698 | 0.8021 | 0.2359 |
| GRAPHDG | 9.1027 | 0.8882 | 1.7973 | 4.1743 | 0.4776 |
| CGCF | 28.9661 | 2.8410 | 2.8356 | 10.6361 | 0.5954 |
| CONFVAE | 8.2080 | 0.6100 | 1.6080 | 3.9111 | 0.2429 |
| CONFGF | 2.7886 | 0.1765 | 0.4688 | 2.1843 | 0.1433 |
| GEODIFF | 0.2597 | 0.1551 | 0.3091 | 0.7033 | 0.1909 |
the chemical toolkit Psi4 (Smith et al., 2020) to calculate each conformer's energy $E$ and HOMO-LUMO gap $\epsilon$ , and compare the average energy $\overline{E}$ , lowest energy $E_{\mathrm{min}}$ , average gap $\overline{\Delta\epsilon}$ , minimum gap $\Delta \epsilon_{\mathrm{min}}$ , and maximum gap $\Delta \epsilon_{\mathrm{max}}$ with the ground truth.

Results & discussion. The mean absolute errors (MAE) between the calculated properties and the ground truth are reported in Tab. 3. CVGAE is excluded due to its poor performance, as also reported in Simm & Hernandez-Lobato (2020) and Shi et al. (2021). The properties are highly sensitive to the geometric structure, and thus the superior performance demonstrates that GEODIFF consistently predicts more accurate conformations across different molecules.

# 6 CONCLUSION

We propose GEODIFF, a novel probabilistic model for generating molecular conformations. GEODIFF marries denoising diffusion models with geometric representations: we parameterize the reverse generative dynamics as a Markov chain, and impose roto-translational invariance on the density with equivariant Markov kernels. We derive a tractable invariant objective from the variational lower bound to optimize the likelihood. Comprehensive experiments over multiple tasks demonstrate that GEODIFF is competitive with the existing state-of-the-art models. Future work includes further improving or accelerating the model with other recent progress on diffusion models, and extending our method to other challenging structures such as proteins.

# ACKNOWLEDGEMENT

Minkai thanks Huiyu Cai, David Wipf, Zuobai Zhang, and Zhaocheng Zhu for their helpful discussions and comments.
This project is supported by the Natural Sciences and Engineering Research Council (NSERC) Discovery Grant, the Canada CIFAR AI Chair Program, collaboration grants between Microsoft Research and Mila, Samsung Electronics Co., Ltd., Amazon Faculty Research Award, Tencent AI Lab Rhino-Bird Gift Fund and a NRC Collaborative R&D Project (AI4D-CORE-06). This project was also partially funded by IVADO Fundamental Research Project grant PRF-2019-3583139727. The Stanford team is supported by NSF(#1651565, #1522054, #1733686), ONR (N000141912145), AFOSR (FA95501910024), ARO (W911NF-21-1-0125) and Sloan Fellowship. + +# REFERENCES + +Mohammed AlQuraishi. End-to-end differentiable learning of protein structure. Cell systems, 8(4): 292-301, 2019. +Simon Axelrod and Rafael Gomez-Bombarelli. Geom: Energy-annotated molecular conformations for property prediction and molecular generation. arXiv preprint arXiv:2006.05531, 2020. +Simon Batzner, Tess E Smidt, Lixin Sun, Jonathan P Mailoa, Mordechai Kornbluth, Nicola Molinari, and Boris Kozinsky. Se (3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. arXiv preprint arXiv:2101.03164, 2021. +Julian Chibane, Thiemo Alldieck, and Gerard Pons-Moll. Implicit functions in feature space for 3d shape reconstruction and completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6970-6981, 2020. +Sybren Ruurds De Groot and Peter Mazur. Non-equilibrium thermodynamics. Courier Corporation, 2013. +Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. In ICLR, 2017. +David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in neural information processing systems, pp. 2224-2232, 2015. +Fabian Fuchs, Daniel Worrall, Volker Fischer, and Max Welling. 
Se(3)-transformers: 3d roto-translation equivariant attention networks. NeurIPS, 2020. +Octavian-Eugen Ganea, Lagnajit Pattanaik, Connor W Coley, Regina Barzilay, Klavs F Jensen, William H Green, and Tommi S Jaakkola. Geomol: Torsional geometric generation of molecular 3d conformer ensembles. arXiv preprint arXiv:2106.07802, 2021. +Niklas WA Gebauer, Michael Gastegger, Stefan SP Hessmann, Klaus-Robert Muller, and Kristof T Schütt. Inverse design of 3d molecular structures with conditional generative neural networks. arXiv preprint arXiv:2109.04824, 2021. +Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1263-1272. JMLR.org, 2017. +T. Gogineni, Ziping Xu, Exequiel Punzalan, Runxuan Jiang, Joshua A Kammeraad, Ambuj Tewari, and P. Zimmerman. Torsionnet: A reinforcement learning approach to sequential conformer search. ArXiv, abs/2006.07078, 2020. +Thomas A Halgren. Merck molecular force field. v. extension of mmff94 using experimental data, additional computational data, and empirical rules. Journal of Computational Chemistry, 17(5-6): 616-641, 1996. +Paul CD Hawkins. Conformation generation: the state of the art. Journal of Chemical Information and Modeling, 57(8):1747-1756, 2017. + +Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136, 2016. +Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. arXiv preprint arXiv:2006.11239, 2020. +Weihua Hu, Muhammed Shuaibi, Abhishek Das, Siddharth Goyal, Anuroop Sriram, Jure Leskovec, Devi Parikh, and Larry Zitnick. Forcenet: A graph neural network for large-scale quantum chemistry simulation. 2021. +John Ingraham, Adam J Riesselman, Chris Sander, and Debora S Marks. Learning protein structure with a differentiable simulator. 
In International Conference on Learning Representations, 2019. +Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation. arXiv preprint arXiv:1802.04364, 2018. +Bowen Jing, Stephan Eismann, Patricia Suriana, Raphael John Lamarre Townshend, and Ron Dror. Learning from protein structure with geometric vector perceptrons. In International Conference on Learning Representations, 2021. +John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al. Highly accurate protein structure prediction with alphafold. Nature, 596(7873):583-589, 2021. +Wolfgang Kabsch. A solution for the best rotation to relate two sets of vectors. Acta Crystallographica Section A: Crystal Physics, Diffraction, Theoretical and General Crystallography, 32(5):922-923, 1976. +Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In 2nd International Conference on Learning Representations, 2013. +Jonas Kohler, Leon Klein, and Frank Noe. Equivariant flows: Exact likelihood generative learning for symmetric densities. In Proceedings of the 37th International Conference on Machine Learning, 2020. +Leo Liberti, Carlile Lavor, Nelson Maculan, and Antonio Mucherino. Euclidean distance geometry and applications. SIAM review, 56(1):3-69, 2014. +Shitong Luo and Wei Hu. Diffusion probabilistic models for 3d point cloud generation. ArXiv, abs/2103.01458, 2021. +Shitong Luo, Chence Shi, Minkai Xu, and Jian Tang. Predicting molecular conformation via dynamic graph score matching. Advances in Neural Information Processing Systems, 34, 2021. +Elman Mansimov, Omar Mahmood, Seokho Kang, and Kyunghyun Cho. Molecular geometry prediction using a deep generative graph neural network. arXiv preprint arXiv:1904.00314, 2019. +B. Miller, M. Geiger, T. Smidt, and F. Noé. 
Relevance of rotationally equivariant convolutions for predicting molecular properties. ArXiv, abs/2008.08461, 2020. +Frank Noé, Simon Olsson, Jonas Kühler, and Hao Wu. Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning. Science, 365(6457), 2019. +Raghunathan Ramakrishnan, Pavlo O Dral, Matthias Rupp, and O Anatole Von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. Scientific data, 1(1):1-7, 2014. +Sereina Riniker and Gregory A. Landrum. Better informed distance geometry: Using what we know to improve conformation generation. Journal of Chemical Information and Modeling, 55(12): 2562-2574, 2015. +Victor Garcia Satorras, Emiel Hoogeboom, Fabian B Fuchs, Ingmar Posner, and Max Welling. E (n) equivariant normalizing flows for molecule generation in 3d. arXiv preprint arXiv:2105.09016, 2021a. + +Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks, 2021b. +Kristof Schütt, Pieter-Jan Kindermans, Huziel Enoc Sauceda Felix, Stefan Chmiela, Alexandre Tkatchenko, and Klaus-Robert Müller. Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. In Advances in Neural Information Processing Systems, pp. 991-1001. Curran Associates, Inc., 2017. +Andrew W Senior, Richard Evans, John Jumper, James Kirkpatrick, Laurent Sifre, Tim Green, Chongli Qin, Augustin Žídek, Alexander WR Nelson, Alex Bridgland, et al. Improved protein structure prediction using potentials from deep learning. Nature, 577(7792):706-710, 2020. +Chence Shi, Minkai Xu, Zhaocheng Zhu, Weinan Zhang, Ming Zhang, and Jian Tang. Graphaf: a flow-based autoregressive model for molecular graph generation. arXiv preprint arXiv:2001.09382, 2020. +Chence Shi, Shitong Luo, Minkai Xu, and Jian Tang. Learning gradient fields for molecular conformation generation. ArXiv, 2021. 
+Muhammed Shuaibi, Adeesh Kolluru, Abhishek Das, Aditya Grover, Anuroop Sriram, Zachary Ulissi, and C Lawrence Zitnick. Rotation invariant graph neural networks using spin convolutions. arXiv preprint arXiv:2106.09575, 2021. +Gregor Simm and Jose Miguel Hernandez-Lobato. A generative model for molecular distance geometry. In Hal Daume III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119, pp. 8949-8958. PMLR, 2020. +Gregor N. C. Simm, Robert Pinsler, Gábor Csányi, and José Miguel Hernández-Lobato. Symmetry-aware actor-critic for 3d molecular design. In International Conference on Learning Representations, 2021. +Daniel G. A. Smith, L. Burns, A. Simmonett, R. Parrish, M. C. Schieber, Raimondas Galvelis, P. Kraus, H. Kruse, Roberto Di Remigio, Asem Alenaizan, A. M. James, S. Lehtola, Jonathon P Misiewicz, et al. Psi4 1.4: Open-source software for high-throughput quantum chemistry. The Journal of chemical physics, 2020. +Jascha Sohl-Dickstein, Eric A Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. arXiv preprint arXiv:1503.03585, 2015. +Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. +Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In Advances in Neural Information Processing Systems, pp. 11918-11930, 2019. +Yang Song and Stefano Ermon. Improved techniques for training score-based generative models. NeurIPS, 2020. +N. Thomas, T. Smidt, Steven M. Kearnes, Lusann Yang, L. Li, Kai Kohlhoff, and P. Riley. Tensor field networks: Rotation- and translation-equivariant neural networks for 3d point clouds. *ArXiv*, 2018. +M. Weiler, M. Geiger, M. Welling, W. Boomsma, and T. Cohen. 3d steerable cnns: Learning rotationally equivariant features in volumetric data. In NeurIPS, 2018. 
+Minkai Xu, Shitong Luo, Yoshua Bengio, Jian Peng, and Jian Tang. Learning neural generative dynamics for molecular conformation generation. In International Conference on Learning Representations, 2021a. +Minkai Xu, Wujie Wang, Shitong Luo, Chence Shi, Yoshua Bengio, Rafael Gomez-Bombarelli, and Jian Tang. An end-to-end framework for molecular conformation generation via bilevel programming. arXiv preprint arXiv:2105.07246, 2021b. + +Linfeng Zhang, Jiequn Han, Han Wang, Roberto Car, and Weinan E. Deep Potential Molecular Dynamics: A Scalable Model with the Accuracy of Quantum Mechanics. Physical Review Letters, 120(14):143001, 2018. +Jinhua Zhu, Yingce Xia, Chang Liu, Lijun Wu, Shufang Xie, Tong Wang, Yusong Wang, Wengang Zhou, Tao Qin, Houqiang Li, et al. Direct molecular conformation generation. arXiv preprint arXiv:2202.01356, 2022. + +# A PROOFS + +# A.1 PROPERTIES OF THE DIFFUSION MODEL + +We include proofs for several key properties of the probabilistic diffusion model here to be self-contained. For more detailed discussions, please refer to Ho et al. (2020). Let $\{\beta_0,\dots,\beta_T\}$ be a sequence of variances, and $\alpha_{t} = 1 - \beta_{t}$ and $\bar{\alpha}_t = \prod_{s = 1}^t\alpha_s$ . The two following properties are crucial for deriving the final tractable objective in equation 9. + +Property 1. Tractable marginal of the forward process: + +$$ +q (\mathcal {C} ^ {t} | \mathcal {C} ^ {0}) = \int q (\mathcal {C} ^ {1: t} | \mathcal {C} ^ {0}) d \mathcal {C} ^ {1: (t - 1)} = \mathcal {N} (\mathcal {C} ^ {t}; \sqrt {\bar {\alpha} _ {t}} \mathcal {C} ^ {0}, (1 - \bar {\alpha} _ {t}) I). +$$ + +Proof. Let $\epsilon_{i}$ 's be independent standard Gaussian random variables. 
Then, by definition of the Markov kernels $q(\mathcal{C}^t | \mathcal{C}^{t - 1})$ in equation 2, we have

$$
\begin{array}{l}
\mathcal{C}^t = \sqrt{\alpha_t}\,\mathcal{C}^{t-1} + \sqrt{\beta_t}\,\epsilon_t \\
= \sqrt{\alpha_t \alpha_{t-1}}\,\mathcal{C}^{t-2} + \sqrt{\alpha_t \beta_{t-1}}\,\epsilon_{t-1} + \sqrt{\beta_t}\,\epsilon_t \\
= \sqrt{\alpha_t \alpha_{t-1} \alpha_{t-2}}\,\mathcal{C}^{t-3} + \sqrt{\alpha_t \alpha_{t-1} \beta_{t-2}}\,\epsilon_{t-2} + \sqrt{\alpha_t \beta_{t-1}}\,\epsilon_{t-1} + \sqrt{\beta_t}\,\epsilon_t \tag{12} \\
= \dots \\
= \sqrt{\bar{\alpha}_t}\,\mathcal{C}^0 + \sqrt{\alpha_t \alpha_{t-1} \cdots \alpha_2 \beta_1}\,\epsilon_1 + \dots + \sqrt{\alpha_t \beta_{t-1}}\,\epsilon_{t-1} + \sqrt{\beta_t}\,\epsilon_t
\end{array}
$$

Therefore $q(\mathcal{C}^t|\mathcal{C}^0)$ is still Gaussian, with mean $\sqrt{\bar{\alpha}_t}\mathcal{C}^0$ and covariance matrix $(\alpha_{t}\alpha_{t - 1}\dots \alpha_{2}\beta_{1} + \dots +\alpha_{t}\beta_{t - 1} + \beta_{t})I = (1 - \bar{\alpha}_{t})I$. Then we have:

$$
q(\mathcal{C}^t \mid \mathcal{C}^0) = \mathcal{N}\left(\mathcal{C}^t; \sqrt{\bar{\alpha}_t}\,\mathcal{C}^0, (1 - \bar{\alpha}_t) I\right).
+$$ + +This property provides convenient closed-form evaluation of $\mathcal{C}^t$ knowing $\mathcal{C}^0$ : + +$$ +\mathcal {C} ^ {t} = \sqrt {\bar {\alpha} _ {t}} \mathcal {C} ^ {0} + \sqrt {1 - \bar {\alpha} _ {t}} \boldsymbol {\epsilon}, +$$ + +where $\epsilon \sim \mathcal{N}(0,I)$ + +Besides, it is worth noting that, + +$$ +q (\mathcal {C} ^ {T} | \mathcal {C} ^ {0}) = \mathcal {N} (\mathcal {C} ^ {T}; \sqrt {\bar {\alpha} _ {T}} \mathcal {C} ^ {0}, (1 - \bar {\alpha} _ {T}) I), +$$ + +where $\bar{\alpha}_T = \prod_{t=1}^T (1 - \beta_t)$ approaches zero with large $T$ , which indicates the diffusion process can finally converge into a whitened noisy distribution. + +Property 2. Tractable posterior of the forward process: + +$$ +q (\mathcal {C} ^ {t - 1} | \mathcal {C} ^ {t}, \mathcal {C} ^ {0}) = \mathcal {N} (\mathcal {C} ^ {t - 1}; \frac {\sqrt {\bar {\alpha} _ {t - 1}} \beta_ {t}}{1 - \bar {\alpha} _ {t}} \mathcal {C} ^ {0} + \frac {\sqrt {\alpha_ {t}} (1 - \bar {\alpha} _ {t - 1})}{1 - \bar {\alpha} _ {t}} \mathcal {C} ^ {t}, \frac {(1 - \bar {\alpha} _ {t - 1})}{1 - \bar {\alpha} _ {t}} \beta_ {t} I). +$$ + +Proof. 
Let $\tilde{\beta}_t = \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t} \beta_t$ ; then we can derive the posterior by Bayes' rule:

$$
\begin{array}{l}
q\left(\mathcal{C}^{t-1} \mid \mathcal{C}^t, \mathcal{C}^0\right) = \frac{q\left(\mathcal{C}^t \mid \mathcal{C}^{t-1}\right) q\left(\mathcal{C}^{t-1} \mid \mathcal{C}^0\right)}{q\left(\mathcal{C}^t \mid \mathcal{C}^0\right)} \\
= \frac{\mathcal{N}\left(\mathcal{C}^t; \sqrt{\alpha_t}\,\mathcal{C}^{t-1}, \beta_t I\right) \mathcal{N}\left(\mathcal{C}^{t-1}; \sqrt{\bar{\alpha}_{t-1}}\,\mathcal{C}^0, (1 - \bar{\alpha}_{t-1}) I\right)}{\mathcal{N}\left(\mathcal{C}^t; \sqrt{\bar{\alpha}_t}\,\mathcal{C}^0, (1 - \bar{\alpha}_t) I\right)} \\
= (2\pi\beta_t)^{-\frac{d}{2}} \left(2\pi(1 - \bar{\alpha}_{t-1})\right)^{-\frac{d}{2}} \left(2\pi(1 - \bar{\alpha}_t)\right)^{\frac{d}{2}} \times \\
\quad \exp\left(-\frac{\left\|\mathcal{C}^t - \sqrt{\alpha_t}\,\mathcal{C}^{t-1}\right\|^2}{2\beta_t} - \frac{\left\|\mathcal{C}^{t-1} - \sqrt{\bar{\alpha}_{t-1}}\,\mathcal{C}^0\right\|^2}{2(1 - \bar{\alpha}_{t-1})} + \frac{\left\|\mathcal{C}^t - \sqrt{\bar{\alpha}_t}\,\mathcal{C}^0\right\|^2}{2(1 - \bar{\alpha}_t)}\right) \\
= \left(2\pi\tilde{\beta}_t\right)^{-\frac{d}{2}} \exp\left(-\frac{1}{2\tilde{\beta}_t}\left\|\mathcal{C}^{t-1} - \frac{\sqrt{\bar{\alpha}_{t-1}}\beta_t}{1 - \bar{\alpha}_t}\mathcal{C}^0 - \frac{\sqrt{\alpha_t}(1 - \bar{\alpha}_{t-1})}{1 - \bar{\alpha}_t}\mathcal{C}^t\right\|^2\right) \tag{13}
\end{array}
$$

This gives the posterior $q(\mathcal{C}^{t - 1}|\mathcal{C}^t,\mathcal{C}^0)$ in the stated form.
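The closed-form marginal of Property 1 can be sanity-checked numerically (a toy illustration of our own, not part of the paper): simulate the stepwise forward chain for a single "atom" and compare the empirical mean and variance of $\mathcal{C}^T$ against $\mathcal{N}(\sqrt{\bar{\alpha}_T}\mathcal{C}^0, (1-\bar{\alpha}_T)I)$:

```python
import numpy as np

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.05, 50)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

C0 = rng.standard_normal(3)   # a fixed toy "conformation" with one atom
n = 200_000                   # number of independent forward trajectories

# Step-by-step forward process: C^t = sqrt(alpha_t) C^{t-1} + sqrt(beta_t) eps_t.
C = np.tile(C0, (n, 1))
for a, b in zip(alphas, betas):
    C = np.sqrt(a) * C + np.sqrt(b) * rng.standard_normal(C.shape)

# Property 1 predicts C^T ~ N(sqrt(abar_T) C^0, (1 - abar_T) I).
assert np.allclose(C.mean(axis=0), np.sqrt(alpha_bars[-1]) * C0, atol=0.02)
assert np.allclose(C.var(axis=0), 1.0 - alpha_bars[-1], atol=0.02)
```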
# A.2 PROOF OF PROPOSITION 1

Let $T_{g}$ be some roto-translational transformation of a group element $g \in \mathrm{SE}(3)$ , and let $p(x_{T})$ be a density which is SE(3)-invariant, i.e., $p(x_{T}) = p(T_{g}(x_{T}))$ . If the Markov transitions $p(x_{t - 1}|x_t)$ are SE(3)-equivariant, i.e., $p(x_{t - 1}|x_t) = p(T_g(x_{t - 1})|T_g(x_t))$ , then the density $p_{\theta}(x_0) = \int p(x_T)p_{\theta}(x_{0:T - 1}|x_T)\,\mathrm{d}x_{1:T}$ is also SE(3)-invariant.

Proof.

$$
\begin{array}{l}
p_{\theta}\left(T_g(x_0)\right) = \int p\left(T_g(x_T)\right) p_{\theta}\left(T_g(x_{0:T-1}) \mid T_g(x_T)\right) \mathrm{d}x_{1:T} \\
= \int p\left(T_g(x_T)\right) \prod_{t=1}^{T} p_{\theta}\left(T_g(x_{t-1}) \mid T_g(x_t)\right) \mathrm{d}x_{1:T} \\
= \int p(x_T) \prod_{t=1}^{T} p_{\theta}\left(T_g(x_{t-1}) \mid T_g(x_t)\right) \mathrm{d}x_{1:T} \quad (\text{invariant prior } p(x_T)) \tag{14} \\
= \int p(x_T) \prod_{t=1}^{T} p_{\theta}\left(x_{t-1} \mid x_t\right) \mathrm{d}x_{1:T} \quad (\text{equivariant kernels } p(x_{t-1} \mid x_t)) \\
= \int p(x_T)\, p_{\theta}\left(x_{0:T-1} \mid x_T\right) \mathrm{d}x_{1:T} \\
= p_{\theta}(x_0)
\end{array}
$$
Let $g\in \mathbb{R}^3$ denote any translation, and let orthogonal matrices $R\in \mathbb{R}^{3\times 3}$ denote any rotation. Let $R\mathbf{x}$ be shorthand for $(R\mathbf{x}_1,\dots ,R\mathbf{x}_N)$ . Formally, we aim to prove that the model satisfies:

$$
R\mathbf{x}^{l+1}, \mathbf{h}^{l+1} = \operatorname{GFN}\left(R\mathbf{x}^l, R\mathcal{C} + g, \mathbf{h}^l\right). \tag{15}
$$

This equation indicates that, given $\mathbf{x}^l$ already rotationally equivariant with $\mathcal{C}$ and $\mathbf{h}^l$ already invariant, this property propagates through a single GFN layer to $\mathbf{x}^{l + 1}$ and $\mathbf{h}^{l + 1}$ .

Proof. Firstly, given that $\mathbf{h}^l$ is already invariant to SE(3) transformations, the messages $\mathbf{m}_{ij}$ calculated from equation 5 will also be invariant. This is because equation 5 depends on the coordinates solely through the distance between two atoms, which is manifestly invariant to rotations: $\| R\mathbf{x}_i^l - R\mathbf{x}_j^l\|^2 = (\mathbf{x}_i^l - \mathbf{x}_j^l)^\top R^\top R(\mathbf{x}_i^l - \mathbf{x}_j^l) = (\mathbf{x}_i^l - \mathbf{x}_j^l)^\top I(\mathbf{x}_i^l - \mathbf{x}_j^l) = \| \mathbf{x}_i^l - \mathbf{x}_j^l\|^2$ . Formally, the invariance of the messages in equation 5 can be written as:

$$
\mathbf{m}_{i,j} = \Phi_m\left(\mathbf{h}_i^l, \mathbf{h}_j^l, \left\| R\mathbf{x}_i^l - R\mathbf{x}_j^l \right\|^2, e_{ij}\right) = \Phi_m\left(\mathbf{h}_i^l, \mathbf{h}_j^l, \left\| \mathbf{x}_i^l - \mathbf{x}_j^l \right\|^2, e_{ij}\right). \tag{16}
$$

Similarly, the $\mathbf{h}^{l + 1}$ updated from equation 6 will also be invariant.

Next, we prove that the vector $\mathbf{x}$ updated from equation 7 preserves rotational equivariance and translational invariance.
Given that $\mathbf{m}_{ij}$ is already invariant as proven above, we have that:

$$
\sum_{j \in \mathcal{N}(i)} \frac{1}{d_{ij}}\left(R\mathbf{c}_i + g - R\mathbf{c}_j - g\right)\Phi_x(\mathbf{m}_{i,j}) = R\sum_{j \in \mathcal{N}(i)} \frac{1}{d_{ij}}\left(\mathbf{c}_i - \mathbf{c}_j\right)\Phi_x(\mathbf{m}_{i,j}) = R\mathbf{x}_i^{l+1}. \tag{17}
$$

Therefore, rotating and translating $\mathbf{c}$ results in the same rotation and no translation on $\mathbf{x}^{l + 1}$ when updating through equation 7.

Thus we can conclude that the property defined in equation 15 is satisfied.

Having proved the equivariance property of a single GFN layer, we can inductively conclude that a composition of $L$ GFN layers preserves the same equivariance.

# A.4 PROOF OF PROPOSITION 3

We first derive the variational lower bound (ELBO) objective in equation 8.
The ELBO can be calculated as follows:

$$
\begin{array}{l} \mathbb{E} \log p_{\theta}(\mathcal{C}^{0} \mid \mathcal{G}) = \mathbb{E} \log \mathbb{E}_{q(\mathcal{C}^{1:T} \mid \mathcal{C}^{0})} \left[ \frac{p_{\theta}(\mathcal{C}^{0:T-1} \mid \mathcal{G}, \mathcal{C}^{T}) \, p(\mathcal{C}^{T})}{q(\mathcal{C}^{1:T} \mid \mathcal{C}^{0})} \right] \\ \geq \mathbb{E}_{q} \log \frac{p_{\theta}(\mathcal{C}^{0:T-1} \mid \mathcal{G}, \mathcal{C}^{T}) \, p(\mathcal{C}^{T})}{q(\mathcal{C}^{1:T} \mid \mathcal{C}^{0})} \\ = \mathbb{E}_{q} \left[ \log p(\mathcal{C}^{T}) + \sum_{t=1}^{T} \log \frac{p_{\theta}(\mathcal{C}^{t-1} \mid \mathcal{G}, \mathcal{C}^{t})}{q(\mathcal{C}^{t} \mid \mathcal{C}^{t-1})} \right] \\ = \mathbb{E}_{q} \Big[ \log p(\mathcal{C}^{T}) + \log \frac{p_{\theta}(\mathcal{C}^{0} \mid \mathcal{G}, \mathcal{C}^{1})}{q(\mathcal{C}^{1} \mid \mathcal{C}^{0})} + \sum_{t=2}^{T} \Big( \log \frac{p_{\theta}(\mathcal{C}^{t-1} \mid \mathcal{G}, \mathcal{C}^{t})}{q(\mathcal{C}^{t-1} \mid \mathcal{C}^{t}, \mathcal{C}^{0})} + \log \frac{q(\mathcal{C}^{t-1} \mid \mathcal{C}^{0})}{q(\mathcal{C}^{t} \mid \mathcal{C}^{0})} \Big) \Big] \\ = \mathbb{E}_{q} \left[ \log \frac{p(\mathcal{C}^{T})}{q(\mathcal{C}^{T} \mid \mathcal{C}^{0})} + \log p_{\theta}(\mathcal{C}^{0} \mid \mathcal{G}, \mathcal{C}^{1}) + \sum_{t=2}^{T} \log \frac{p_{\theta}(\mathcal{C}^{t-1} \mid \mathcal{G}, \mathcal{C}^{t})}{q(\mathcal{C}^{t-1} \mid \mathcal{C}^{t}, \mathcal{C}^{0})} \right] \\ = -\mathbb{E}_{q} \left[ \mathrm{KL}\left(q(\mathcal{C}^{T} \mid \mathcal{C}^{0}) \,\|\, p(\mathcal{C}^{T})\right) + \sum_{t=2}^{T} \mathrm{KL}\left(q(\mathcal{C}^{t-1} \mid \mathcal{C}^{t}, \mathcal{C}^{0}) \,\|\, p_{\theta}(\mathcal{C}^{t-1} \mid \mathcal{G}, \mathcal{C}^{t})\right) - \log p_{\theta}(\mathcal{C}^{0} \mid \mathcal{G}, \mathcal{C}^{1}) \right]. \tag{18} \end{array}
$$

It can be noted that the first term $\mathrm{KL}\left(q(\mathcal{C}^T \mid \mathcal{C}^0) \,\|\, p(\mathcal{C}^T)\right)$ is a constant and can therefore be omitted from the objective. Furthermore, for brevity, we merge the final term $\log p_{\theta}(\mathcal{C}^0 \mid \mathcal{G}, \mathcal{C}^1)$ into the second term (the sum over KL divergences), and finally obtain $\mathcal{L}_{\mathrm{ELBO}} = \sum_{t=1}^{T} D_{\mathrm{KL}}(q(\mathcal{C}^{t-1} \mid \mathcal{C}^t, \mathcal{C}^0) \,\|\, p_{\theta}(\mathcal{C}^{t-1} \mid \mathcal{G}, \mathcal{C}^t))$ as in equation 8.

Now we consider how to compute the KL divergences as in Proposition 3. Since $q(\mathcal{C}^{t-1} \mid \mathcal{C}^t, \mathcal{C}^0)$ and $p_{\theta}(\mathcal{C}^{t-1} \mid \mathcal{G}, \mathcal{C}^t)$ are both Gaussians sharing the same covariance matrix $\tilde{\beta}_t I$, the KL divergence between them equals the squared $\ell_2$ distance between their means, weighted by $\frac{1}{2\tilde{\beta}_t}$. From the expression of $q(\mathcal{C}^t \mid \mathcal{C}^0)$, we have the reparameterization $\mathcal{C}^t = \sqrt{\bar{\alpha}_t}\,\mathcal{C}^0 + \sqrt{1 - \bar{\alpha}_t}\,\epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$.
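As a quick numerical check of this reparameterization (illustrative only: the linear $\beta$ schedule and toy system size below are assumptions, not the paper's settings), a closed-form sample of $\mathcal{C}^t$ reproduces the marginal mean $\sqrt{\bar{\alpha}_t}\,\mathcal{C}^0$ and per-coordinate variance $1 - \bar{\alpha}_t$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear variance schedule (illustrative; not the paper's sigmoid schedule).
T = 1000
betas = np.linspace(1e-6, 2e-2, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

# A toy "conformation" C^0 with N atoms in 3-D.
N = 8
C0 = rng.normal(size=(N, 3))

t = 500                                  # arbitrary timestep (1-indexed, at index t-1)
n = 50000
eps = rng.normal(size=(n, N, 3))
Ct = np.sqrt(alpha_bar[t - 1]) * C0 + np.sqrt(1.0 - alpha_bar[t - 1]) * eps

# q(C^t | C^0) should have mean sqrt(abar_t) * C^0 and per-coordinate
# variance 1 - abar_t; check both against the empirical moments.
mean_err = np.abs(Ct.mean(axis=0) - np.sqrt(alpha_bar[t - 1]) * C0).max()
var_err = np.abs(Ct.var(axis=0) - (1.0 - alpha_bar[t - 1])).max()
```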
Then we can derive:

$$
\begin{array}{l} \mathbb{E}_{q} \, \mathrm{KL}\left(q(\mathcal{C}^{t-1} \mid \mathcal{C}^{t}, \mathcal{C}^{0}) \,\|\, p_{\theta}(\mathcal{C}^{t-1} \mid \mathcal{G}, \mathcal{C}^{t})\right) \\ = \frac{1}{2\tilde{\beta}_{t}} \mathbb{E}_{\mathcal{C}^{0}} \left\| \frac{\sqrt{\bar{\alpha}_{t-1}} \beta_{t}}{1 - \bar{\alpha}_{t}} \mathcal{C}^{0} + \frac{\sqrt{\alpha_{t}} (1 - \bar{\alpha}_{t-1})}{1 - \bar{\alpha}_{t}} \mathcal{C}^{t} - \frac{1}{\sqrt{\alpha_{t}}} \left( \mathcal{C}^{t} - \frac{\beta_{t}}{\sqrt{1 - \bar{\alpha}_{t}}} \epsilon_{\theta}(\mathcal{C}^{t}, \mathcal{G}, t) \right) \right\|^{2} \\ = \frac{1}{2\tilde{\beta}_{t}} \mathbb{E}_{\mathcal{C}^{0}, \epsilon} \left\| \frac{\sqrt{\bar{\alpha}_{t-1}} \beta_{t}}{1 - \bar{\alpha}_{t}} \cdot \frac{\mathcal{C}^{t} - \sqrt{1 - \bar{\alpha}_{t}} \epsilon}{\sqrt{\bar{\alpha}_{t}}} + \frac{\sqrt{\alpha_{t}} (1 - \bar{\alpha}_{t-1})}{1 - \bar{\alpha}_{t}} \mathcal{C}^{t} - \frac{1}{\sqrt{\alpha_{t}}} \left( \mathcal{C}^{t} - \frac{\beta_{t}}{\sqrt{1 - \bar{\alpha}_{t}}} \epsilon_{\theta}(\mathcal{C}^{t}, \mathcal{G}, t) \right) \right\|^{2} \\ = \frac{1}{2\tilde{\beta}_{t}} \cdot \frac{\beta_{t}^{2}}{\alpha_{t} (1 - \bar{\alpha}_{t})} \mathbb{E}_{\mathcal{C}^{0}, \epsilon} \left\| 0 \cdot \mathcal{C}^{t} + \epsilon - \epsilon_{\theta}(\mathcal{C}^{t}, \mathcal{G}, t) \right\|^{2} \\ = \frac{\beta_{t}^{2}}{2 \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_{t}} \beta_{t} \, \alpha_{t} (1 - \bar{\alpha}_{t})} \mathbb{E}_{\mathcal{C}^{0}, \epsilon} \left\| \epsilon - \epsilon_{\theta}(\mathcal{C}^{t}, \mathcal{G}, t) \right\|^{2} \\ = \gamma_{t} \, \mathbb{E}_{\mathcal{C}^{0}, \epsilon} \left\| \epsilon - \epsilon_{\theta}(\mathcal{C}^{t}, \mathcal{G}, t) \right\|^{2}, \tag{19} \end{array}
$$

where $\gamma_{t}$ denotes the weight $\frac{\beta_t}{2\alpha_t(1 - \bar{\alpha}_{t-1})}$. This finishes the proof.

# A.5 ANALYSIS OF THE INVARIANT DENSITY IN SEC. 4.2

Given a geometric system $x \in \mathbb{R}^{N \cdot 3}$, we obtain the CoM-free $\hat{x}$ by subtracting its CoM. This can be viewed as a linear transformation:

$$
\hat{x} = Qx, \quad \text{where} \quad Q = I_{3} \otimes \left(I_{N} - \frac{1}{N} \mathbf{1}_{N} \mathbf{1}_{N}^{T}\right), \tag{20}
$$

where $I_{k}$ denotes the $k \times k$ identity matrix and $\mathbf{1}_k$ denotes the $k$-dimensional all-ones vector. Note that $Q$ is a symmetric projection operator, i.e., $Q^{2} = Q$ and $Q^T = Q$, and that $\mathrm{rank}[Q] = (N - 1) \cdot 3$. Furthermore, letting $U$ denote the space of CoM-free systems, we have $Qy = y$ for any $y \in U$, since the CoM of $y$ is already zero.

Formally, let $n = N \cdot 3$ and equip $\mathbb{R}^n$ with the isotropic normal distribution $\rho = \mathcal{N}(0, I_n)$; the CoM-free density can then be written as $\hat{\rho} = \mathcal{N}(0, QI_nQ^T) = \mathcal{N}(0, QQ^T)$. Thus, sampling from $\hat{\rho}$ can be achieved simply by sampling from $\rho$ and then projecting with $Q$. Moreover, $\hat{\rho}(y)$ can be calculated from $\rho(y)$: for any $y \in U$ we have $\|y\|_2^2 = \|Qy\|_2^2$, and thus $\rho(y) = \hat{\rho}(y)$.

In this paper, since the SE(3)-equivariant Markov kernels of the reverse process map any CoM-free system to another CoM-free system, we can induce a well-defined Markov chain on the subspace spanned by $Q$.

# B OTHER RELATED WORK

Protein structure generation. There have also been many recent works on protein structure folding. For example, Boltzmann generators (Noé et al., 2019) use flow-based models to generate the structure of protein main chains.
AlQuraishi (2019) uses recurrent networks to model the amino acid sequences. Ingraham et al. (2019) propose neural networks that learn an energy simulator to infer protein structures. Most recently, AlphaFold (Senior et al., 2020; Jumper et al., 2021) has significantly improved the performance of protein structure generation. Nevertheless, proteins are mainly linear backbone structures, while general molecules are highly branched with various rings, making protein folding approaches unsuitable for our setting.

Point cloud generation. Recently, some other works (Luo & Hu, 2021; Chibane et al., 2020) have also been proposed for 3D structure generation with diffusion-based models, but they focus on the point cloud problem. In general, however, point clouds are not treated as graphs with various atom and bond information, and equivariance is also not widely considered, making these methods fundamentally different from our model.

# C EXPERIMENT DETAILS

In this section, we introduce the details of our experiments. In practice, the means $\epsilon_{\theta}$ are parameterized as compositions of both typical invariant MPNNs (Schütt et al., 2017) and the proposed equivariant GFNs in Sec. 4.2. As a default setup, the MPNNs for parameterizing the means $\epsilon_{\theta}$ are all implemented with 4 layers, and the hidden embedding dimension is set to 128. After the MPNNs, we obtain informative invariant atom embeddings, which we denote as $\mathbf{h}^0$. The embeddings $\mathbf{h}^0$ are then fed into equivariant layers and updated with equation 5, equation 6, and equation 7 to obtain the equivariant output. For the training of GEODIFF, we train the model on a single Tesla V100 GPU with a learning rate of 0.001 until convergence, using Adam (Kingma & Ba, 2014) as the optimizer. The practical training time is $\sim 48$ hours. The other hyper-parameters of GEODIFF are summarized in Tab.
4, including the lowest variance level $\beta_{1}$, the highest variance level $\beta_{T}$, the variance schedule, the number of diffusion timesteps $T$, the radius threshold $\tau$ for determining the neighbors of atoms, the batch size, and the number of training iterations.

Table 4: Additional hyperparameters of our GEODIFF.
| Task | $\beta_1$ | $\beta_T$ | $\beta$ scheduler | $T$ | $\tau$ | Batch Size | Train Iter. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| QM9 | 1e-7 | 2e-3 | sigmoid | 5000 | 10 Å | 64 | 1M |
| Drugs | 1e-7 | 2e-3 | sigmoid | 5000 | 10 Å | 32 | 1M |
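To make these hyperparameters concrete, the following sketch instantiates the sigmoid variance schedule and the resulting loss weights $\gamma_t$ of equation 19. The exact sigmoid parameterization (the `span` argument below) is an assumption, since only the endpoint variances are listed above:

```python
import numpy as np

def sigmoid_beta_schedule(beta_1, beta_T, T, span=6.0):
    # Hypothetical sigmoid interpolation between beta_1 and beta_T,
    # normalized so that the endpoints are hit exactly.
    s = 1.0 / (1.0 + np.exp(-np.linspace(-span, span, T)))
    s = (s - s[0]) / (s[-1] - s[0])
    return beta_1 + (beta_T - beta_1) * s

# Values from Tab. 4 (QM9 and Drugs share the same schedule).
betas = sigmoid_beta_schedule(1e-7, 2e-3, T=5000)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

# gamma_t = beta_t / (2 * alpha_t * (1 - alpha_bar_{t-1})) from equation 19,
# defined for t >= 2 (alpha_bar_0 = 1 would make t = 1 a special case).
gamma = betas[1:] / (2.0 * alphas[1:] * (1.0 - alpha_bar[:-1]))
```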
# D ADDITIONAL EXPERIMENTS

# D.1 RESULTS FOR GEOM-QM9

The results on the GEOM-QM9 dataset are reported in Tab. 5.

Table 5: Results on the GEOM-QM9 dataset, without FF optimization.
| Models | COV-R Mean (%) ↑ | COV-R Median (%) ↑ | MAT-R Mean (Å) ↓ | MAT-R Median (Å) ↓ | COV-P Mean (%) ↑ | COV-P Median (%) ↑ | MAT-P Mean (Å) ↓ | MAT-P Median (Å) ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CVGAE | 0.09 | 0.00 | 1.6713 | 1.6088 | - | - | - | - |
| GRAPHDG | 73.33 | 84.21 | 0.4245 | 0.3973 | 43.90 | 35.33 | 0.5809 | 0.5823 |
| CGCF | 78.05 | 82.48 | 0.4219 | 0.3900 | 36.49 | 33.57 | 0.6615 | 0.6427 |
| CONFVAE | 77.84 | 88.20 | 0.4154 | 0.3739 | 38.02 | 34.67 | 0.6215 | 0.6091 |
| GEOMOL | 71.26 | 72.00 | 0.3731 | 0.3731 | - | - | - | - |
| CONFGF | 88.49 | 94.31 | 0.2673 | 0.2685 | 46.43 | 43.41 | 0.5224 | 0.5124 |
| GEODIFF-A | 90.54 | 94.61 | 0.2104 | 0.2021 | 52.35 | 50.10 | 0.4539 | 0.4399 |
| GEODIFF-C | 90.07 | 93.39 | 0.2090 | 0.1988 | 52.79 | 50.29 | 0.4448 | 0.4267 |
Table 6: Additional results on the GEOM-Drugs dataset, without FF optimization.
| Models | COV-R Mean (%) ↑ | COV-R Median (%) ↑ | MAT-R Mean (Å) ↓ | MAT-R Median (Å) ↓ | COV-P Mean (%) ↑ | COV-P Median (%) ↑ | MAT-P Mean (Å) ↓ | MAT-P Median (Å) ↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GEODIFF (T=1000) | 82.96 | 96.29 | 0.9525 | 0.9334 | 48.27 | 46.03 | 1.3205 | 1.2724 |
# D.2 ABLATION STUDY WITH FEWER DIFFUSION STEPS

We also test our method with fewer diffusion steps. Specifically, we test the setting with $T = 1000$, $\beta_{1} = 1\mathrm{e-}7$, and $\beta_{T} = 9\mathrm{e-}3$. The results on the more challenging Drugs dataset are shown in Tab. 6. Compared with the results in Tab. 1, we observe that with the diffusion steps set to 1000, though slightly weaker than the performance with 5000 decoding steps, the model already outperforms all existing baselines. Note that the most competitive baseline, CONFGF (Shi et al., 2021), also requires 5000 sampling steps, which indicates that our model can achieve better performance at a lower computational cost than the state-of-the-art method.

# E MORE VISUALIZATIONS

We provide more visualizations of generated structures in Fig. 3. The molecules are chosen from the test split of the GEOM-Drugs dataset.

![](images/bee99e9c06b033791aa614e834e991e365574e622a86cca3e46c1c1c1c105355.jpg)
Figure 3: Visualization of drug-like conformations generated by GEODIFF.
# HYPERPARAMETER TUNING WITH RENYI DIFFERENTIAL PRIVACY

Nicolas Papernot*

Google Research, Brain Team
papernot@google.com

Thomas Steinke*

Google Research, Brain Team
hyper@thomas-steinke.net

# ABSTRACT

For many differentially private algorithms, such as the prominent noisy stochastic gradient descent (DP-SGD), the analysis needed to bound the privacy leakage of a single training run is well understood. However, few studies have reasoned about the privacy leakage resulting from the multiple training runs needed to fine-tune the value of the training algorithm's hyperparameters.
In this work, we first illustrate how simply setting hyperparameters based on non-private training runs can leak private information. Motivated by this observation, we then provide privacy guarantees for hyperparameter search procedures within the framework of Renyi Differential Privacy. Our results improve and extend the work of Liu and Talwar (STOC 2019). Our analysis supports our previous observation that tuning hyperparameters does indeed leak private information, but we prove that, under certain assumptions, this leakage is modest, as long as each candidate training run needed to select hyperparameters is itself differentially private. + +# 1 INTRODUCTION + +Machine learning (ML) systems memorize training data and regurgitate excerpts from it when probed (Carlini et al., 2020). If the training data includes sensitive personal information, then this presents an unacceptable privacy risk (Shokri et al., 2017). It may however still be useful to apply machine learning to such data, e.g., in the case of healthcare (Kourou et al., 2015; Wiens et al., 2019). This has led to a significant body of research on the development of privacy-preserving machine learning methods. Differential privacy (DP) (Dwork et al., 2006b;a) provides a robust and quantitative privacy guarantee. It has been widely accepted as the best framework for formally reasoning about the privacy guarantees of a machine learning algorithm. + +A popular method for ensuring DP is noisy (stochastic) gradient descent (a.k.a. DP-SGD) (Song et al., 2013; Bassily et al., 2014; Abadi et al., 2016). DP-SGD differs from standard (stochastic) gradient descent in three ways. First, gradients are computed on a per-example basis rather than directly averaged across a minibatch of training examples. Second, each of these individual gradients is clipped to ensure its 2-norm is bounded. Third, Gaussian noise is added to the gradients as they are averaged and applied to update model parameters. 
These modifications bound the sensitivity of each update so that the added noise ensures differential privacy. The composition (Dwork et al., 2010) and privacy amplification by subsampling (Balle et al., 2018) properties of differential privacy thus imply that the overall training procedure is differentially private. We can compute tight privacy loss bounds for DP-SGD using techniques like the Moments Accountant (Abadi et al., 2016) or the closely related framework of Renyi DP (Mironov, 2017; Mironov et al., 2019). + +Machine learning systems have hyperparameters, such as the learning rate, minibatch size, or choice of a regularizer to prevent overfitting. Details of the model architecture can also be treated as hyperparameters of the optimization problem. Furthermore, learning within the constraints of differential privacy may introduce additional hyperparameters, as illustrated in the DP-SGD optimizer by the 2-norm bound value for clipping, the scale of the Gaussian noise, and the choice of stopping time. Typically the training procedure is repeated many times with different hyperparameter settings in order to select the best setting, an operation known as hyperparameter tuning. This methodology implies that even if each run of the training procedure is privacy-preserving, we need to take into + +account the fact that the training procedure is repeated (possibly many times) when reasoning about the privacy of the overall learning procedure. + +Can the tuning of hyperparameters reveal private information? This question has received remarkably little attention and, in practice, it is often ignored entirely. We study this question and provide both positive and negative answers. + +# 1.1 OUR CONTRIBUTIONS + +- We show that, under certain circumstances, the setting of hyperparameters can leak private information. Hyperparameters are a narrow channel for private information to leak through, but they can still reveal information about individuals if we are careless. 
Specifically, if we tune the hyperparameters in an entirely non-private fashion, then individual outliers can noticeably skew the optimal hyperparameter settings. This is sufficient to reveal the presence or absence of these outliers à la membership inference (Shokri et al., 2017); it shows that we must exercise care when setting hyperparameters. +- We provide tools for ensuring that the selection of hyperparameters is differentially private. Specifically, if we repeat the training procedure multiple times (with different hyperparameters) and each repetition of the training procedure is differentially private on its own, then outputting the best repetition is differentially private. Of course, a basic version of such a result follows from the composition properties of differential privacy (that is the fact that one can "sum" the privacy loss bounds of multiple differentially private analyses performed on the same data to bound the overall privacy loss from analyzing this data). However, we provide quantitatively sharper bounds. + +Specifically, our privacy loss bounds are either independent of the number of repetitions or grow logarithmically in the number of repetitions, whereas composition would give linear bounds. Rather than repeating the training procedure a fixed number of times, our results require repeating the training procedure a random number of times. The privacy guarantees depend on the distribution of the number of runs; we consider several distributions and provide generic results. We discover a tradeoff between the privacy parameters and how heavy-tailed the distribution of the number of repetitions is. + +# 1.2 BACKGROUND AND RELATED WORK + +Differential privacy (DP) is a framework to reason about the privacy guarantees of randomized algorithms which analyze data (Dwork et al., 2006b;a). An algorithm is said to be DP if its outputs on any pair of datasets that only differ on one individual's data are indistinguishable. 
A bound on this indistinguishability serves as the quantification for privacy. Formally, a randomized algorithm $M: \mathcal{X}^n \to \mathcal{Y}$ is $(\varepsilon, \delta)$ -DP if for any inputs $x, x' \in \mathcal{X}^n$ differing only on the addition, removal, or replacement of one individual's records and for any subset of outputs $S \subseteq \mathcal{Y}$ , we have + +$$ +\mathbb {P} [ M (x) \in S ] \leq e ^ {\varepsilon} \mathbb {P} [ M \left(x ^ {\prime}\right) \in S ] + \delta . \tag {1} +$$ + +Here, the parameter $\varepsilon$ is known as the privacy loss bound – the smaller $\varepsilon$ is, the stronger the privacy guarantee provided is, because it is hard for an adversary to distinguish the outputs of the algorithm on two adjacent inputs. The parameter $\delta$ is essentially the probability that the guarantee fails to hold. One of the key properties of DP is that it composes: running multiple independent DP algorithms is also DP and composition theorems allow us to bound the privacy parameters of such a sequence of mechanisms in terms of the individual mechanisms' privacy parameters (Dwork & Roth, 2014). + +There is a vast literature on differential privacy in machine learning. A popular tool is the DP-SGD optimizer (Abadi et al., 2016). Because the noise added is Gaussian and DP-SGD applies the same (differentially private) training step sequentially, it is easier to reason about its privacy guarantees in the framework of Rényi differential privacy (Mironov, 2017). Rényi differential privacy (RDP) generalizes pure differential privacy (with $\delta = 0$ ) as follows. 
An algorithm $M$ is said to be $(\lambda, \varepsilon)$-RDP with $\lambda \geq 1$ and $\varepsilon \geq 0$, if for any adjacent inputs $x, x'$

$$
\mathrm{D}_{\lambda}\left(M(x) \,\middle\|\, M(x')\right) := \frac{1}{\lambda - 1} \log \mathbb{E}_{Y \leftarrow M(x)}\left[\left(\frac{\mathbb{P}[M(x) = Y]}{\mathbb{P}[M(x') = Y]}\right)^{\lambda - 1}\right] \leq \varepsilon, \tag{2}
$$

where $\mathrm{D}_{\lambda}(P\|Q)$ is the Rényi divergence of order $\lambda$ between distributions $P$ and $Q$. In the framework of RDP, one obtains sharp and simple composition: if each individual mechanism $M_{i}$ is $(\lambda, \varepsilon_{i})$-RDP, then the composition of running all of the mechanisms on the data satisfies $(\lambda, \sum_{i} \varepsilon_{i})$-RDP. For instance, the privacy analysis of DP-SGD first analyzes the individual training steps and then applies composition. Note that it is common to keep track of multiple orders $\lambda$ in the analysis. Thus $\varepsilon$ should be thought of as a function $\varepsilon(\lambda)$, rather than a single number. In many cases, such as Gaussian noise addition, this is a linear function, i.e., $\varepsilon(\lambda) = \rho \cdot \lambda$ for some $\rho \in \mathbb{R}$, and such a linear bound yields the definition of zero-Concentrated DP with parameter $\rho$ ($\rho$-zCDP) (Bun & Steinke, 2016).

One could naively extend this composition-based approach to analyze the privacy of a training algorithm which involves hyperparameter tuning. Indeed, if each training run performed to evaluate one candidate set of hyperparameter values is DP, the overall procedure is also DP by composition over all the hyperparameter values tried. However, this would lead to very loose guarantees of privacy. Chaudhuri & Vinterbo (2013) were the first to obtain improved DP bounds for hyperparameter tuning, but their results require a stability property of the learning algorithm.
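For intuition on how loose this naive composition is, the following sketch (with illustrative parameter values) converts a $\rho$-zCDP guarantee, i.e. $\varepsilon(\lambda) = \rho\lambda$, to $(\varepsilon, \delta)$-DP via the standard RDP-to-DP conversion; under naive composition over $m$ runs, $\rho$ simply scales by $m$:

```python
import math

def zcdp_to_dp(rho, delta):
    # A rho-zCDP algorithm is (lam, rho * lam)-RDP for every lam > 1; optimizing
    # the standard conversion eps = rho * lam + log(1/delta) / (lam - 1) over lam
    # gives eps = rho + 2 * sqrt(rho * log(1/delta)).
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

rho, delta = 0.1, 1e-6
eps_single = zcdp_to_dp(rho, delta)          # one 0.1-zCDP training run
eps_naive_10 = zcdp_to_dp(10 * rho, delta)   # naive composition over 10 runs
```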
The only prior work that has attempted to obtain tighter guarantees for DP hyperparameter tuning in a black-box fashion is the work of Liu & Talwar (2019). Their work is the starting point for ours.

Liu & Talwar (2019) show that, if we start with an $(\varepsilon, 0)$-DP algorithm, repeatedly run it a random number of times following a geometric distribution, and finally return the best output produced by these runs, then this system satisfies $(3\varepsilon, 0)$-differential privacy. Liu & Talwar (2019) also consider algorithms satisfying $(\varepsilon, \delta)$-DP for $\delta > 0$. However, their analysis is restricted to the $(\varepsilon, \delta)$ formulation of DP and they do not give any results for Rényi DP. This makes it difficult to apply these results to modern DP machine learning systems, such as models trained with DP-SGD.

Our results directly improve on the results of Liu & Talwar (2019). We show that replacing the geometric distribution on the number of repetitions in their result with the logarithmic distribution yields $(2\varepsilon, 0)$-differential privacy as the final result. We also consider other distributions on the number of repetitions, which give a spectrum of results. We simultaneously extend these results to the Rényi DP framework, which yields sharper privacy analyses.

Independently, Mohapatra et al. (2021) study adaptive hyperparameter tuning under DP with composition. In contrast, our results are for non-adaptive hyperparameter tuning, i.e., "random search."

A closely related line of work is on the problem of private selection. Well-known algorithms for private selection include the exponential mechanism (McSherry & Talwar, 2007) and the sparse vector technique (Dwork et al., 2009; Zhu & Wang, 2020). However, this line of work assumes that there is a low-sensitivity function determining the quality of each of the options. This is usually not the case for hyperparameters.
Our results simply treat the ML algorithm as a black box; we only assume that its output is private and make no assumptions about how that output was generated. Our results also permit returning the entire trained model along with the selected hyperparameters.

# 2 MOTIVATION

A hyperparameter typically takes categorical values (e.g., the choice of activation function in a neural network layer), or is a single number (e.g., a real number for the learning rate or an integer for the number of epochs). Thus, it is intuitive that a hyperparameter provides little capacity as a channel to leak private information from the training data. Nevertheless, leakage can happen, in particular when training is done without preserving privacy. We illustrate how with a constructed example of hyperparameter tuning for a support vector machine (SVM) learned from a synthetic data distribution. We consider an SVM with a soft margin; we use stochastic gradient descent to minimize the corresponding objective involving the hinge loss and a weight penalty:

$$
l_{w, b}(x, y) = \left\|w\right\|_{2}^{2} + \alpha \max \{0, 1 - y(w \cdot x + b)\}
$$

where $y \in \{-1, 1\}$ indicates the label of training example $x \in \mathbb{R}^2$. Because our purpose here is to illustrate how leakage of private information can arise from hyperparameter tuning, we work with a synthetic data distribution for simplicity of exposition: we draw 40 inputs from isotropic 2D Gaussians of standard deviation 1.0 to form the training set $\mathcal{D}$. The negative class is sampled from a Gaussian centered at $\mu_{-1} = (7.86, -3.36)$ and the positive at $\mu_1 = (6.42, -9.17)$.

Our learning procedure has a single hyperparameter $\alpha$ controlling how much importance is given to the hinge loss, i.e., how much the SVM is penalized for using slack variables to misclassify some of
We first tune the value of $\alpha$ with the training set $\mathcal{D}$ and report training accuracy as a function of $\alpha$ in Figure 1. Next, we repeat this experiment on a dataset $\mathcal{D}'$ to which we added 8 outliers $x_{o} = (7.9, -8.0)$ to the negative class. The resulting hyperparameter tuning curve is added to Figure 1. By comparing both curves, it is clear that the choice of hyperparameter $\alpha$ which maximizes accuracy differs in the two settings: the best performance is achieved around $\alpha = 8$ with outliers whereas increasing $\alpha$ is detrimental to performance without outliers. + +This difference can be exploited to perform a variant of a membership inference attack (Shokri et al., 2017): Here, one could infer from the value of the hyperparameter $\alpha$ whether or not the outlier points $x_{o}$ were part of the training set or not. While the example is constructed, this shows how we must be careful when tuning hyperparameters: in corner cases such as the one presented here, it is possible for some information contained in the training data to leak in hyperparameter choices. + +In particular, this implies that the common practice of tuning hyperparameters without differential privacy and then using the hyperparameter values selected to repeat training one last time with differential privacy is not ideal. In Section 3, we will in particular show how training with differential privacy when performing the + +![](images/98cf7e13b9ac484f3ae702a5f10365f1af9e250655869bf5075a2a46434130d8.jpg) +Figure 1: Accuracy of the model as a function of the regularization weight $\alpha$ , with and without outliers. Note how the model performance exhibits a turning point with outliers whereas increasing the value of $\alpha$ is detrimental without outliers. + +different runs necessary to tune the hyperparameter can bound such leakage effectively if one carefully chooses the number of runs hyperparameters are tuned for. 
# 3 OUR POSITIVE RESULTS

# 3.1 PROBLEM FORMULATION

We begin by appropriately formalizing the problem of differentially private hyperparameter tuning, following the framework of Liu & Talwar (2019). Suppose we have $m$ randomized base algorithms $M_{1}, M_{2}, \dots, M_{m}: \mathcal{X}^{n} \to \mathcal{Y}$. These correspond to $m$ possible settings of the hyperparameters. Ideally, we would simply run each of these algorithms once and return the best outcome. For simplicity, we consider a finite set of hyperparameter possibilities; if the hyperparameters of interest are continuous, then we must pick a finite subset to search over (which is sufficient in practice).

Here we make two simplifying assumptions: First, we assume that there is a total order on the range $\mathcal{Y}$, which ensures that "best" is well-defined. In particular, we are implicitly assuming that the algorithm computes a quality score (e.g., accuracy on a test set) for the trained model it produces; this may require allocating some privacy budget to this evaluation. Second, we are assuming that the output includes both the trained model and the corresponding hyperparameter values (i.e., the output of $M_{j}$ includes the index $j$). These assumptions can be made without loss of generality.

# 3.2 STRAWMAN APPROACH: REPEAT THE BASE ALGORITHM A FIXED NUMBER OF TIMES

The obvious approach to this problem would be to run each algorithm once and to return the best of the $m$ outcomes. From composition, we know that the privacy cost grows at most linearly with $m$. It turns out that this is in fact tight. There exists an $(\varepsilon, 0)$-DP algorithm such that if we repeatedly run it $m$ times and return the best output, the resultant procedure is not $(m\varepsilon - \tau, 0)$-DP for any $\tau > 0$. This was observed by Liu & Talwar (2019, Appendix B) and we provide an analysis in Appendix D.1. This negative result also extends to Rényi DP.
To avoid this problem, we will run the base algorithms a random number of times. The added uncertainty significantly enhances privacy. However, we must carefully choose this random distribution and analyze it. + +# 3.3 OUR ALGORITHM FOR HYPERPARAMETER TUNING + +To obtain good privacy bounds, we must run the base algorithms a random number of times. We remark that random search rather than grid search is often performed in practice (Bergstra & Bengio, 2012), so this is not a significant change in methodology. Specifically, we pick a total number of runs $K$ from some distribution. Then, for each run $k = 1, 2, \dots, K$ , we pick an index $j_k \in [m]$ uniformly at random and run $M_{j_k}$ . Then, at the end, we return the best of the $K$ outcomes. + +The privacy guarantee of this overall system then depends on the privacy guarantees of each of the mechanisms $M_{j}$ as well as the distribution of the number of runs $K$ . Specifically, we assume that there exists a uniform (Rényi) DP bound for all of the mechanisms $M_{j}$ . Note that DP is "convex" where "convexity" here means that if $M_{1}, M_{2}, \dots, M_{m}$ are each individually DP, then running $M_{j_k}$ for a random $j_{k} \in [m]$ is also DP with the same parameters. + +To simplify notation, we also assume that there is a single mechanism $Q: \mathcal{X}^n \to \mathcal{Y}$ that picks a random index $j \in [m]$ and then runs $M_j$ . In essence, our goal is to "boost" the success probability of $Q$ by repeating it many times. The distribution of the number of runs $K$ must be chosen to both ensure good privacy guarantees and to ensure that the system is likely to pick a good setting of hyperparameters. Also, the overall runtime of the system depends on $K$ and we want the runtime to be efficient and predictable. Our results consider several distributions on the number of repetitions $K$ and the ensuing tradeoff between these three considerations. 
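The repetition scheme above can be sketched as follows. The stand-in $Q$ and its quality scores are hypothetical; in practice, $Q$ would train a model under DP for a uniformly random hyperparameter setting and privately evaluate it:

```python
import numpy as np

rng = np.random.default_rng(0)

def tune(run_Q, sample_K, score):
    # Draw a random number of repetitions K, run Q that many times,
    # and return the best of the K outcomes.
    K = sample_K()
    outputs = [run_Q() for _ in range(K)]
    return max(outputs, key=score)

# Hypothetical stand-in for Q: pick a hyperparameter index j uniformly at
# random and return (j, noisy quality score of the resulting model).
qualities = [0.6, 0.8, 0.7]
def run_Q():
    j = int(rng.integers(len(qualities)))
    return (j, qualities[j] + rng.normal(0.0, 0.05))

# Geometric number of repetitions (the truncated negative binomial with
# eta = 1); gamma = 0.2 gives E[K] = 1/gamma = 5.
best = tune(run_Q, sample_K=lambda: int(rng.geometric(0.2)), score=lambda o: o[1])
```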
+ +# 3.4 MAIN RESULTS + +There are many possible distributions for the number of repetitions $K$ . In this section, we first consider two – the truncated negative binomial and Poisson distributions – and state our main privacy results for these distributions. Later, in Section 3.5, we state a more general technical lemma which applies to any distribution on the number of repetitions $K$ . + +Definition 1 (Truncated Negative Binomial Distribution). Let $\gamma \in (0,1)$ and $\eta \in (-1,\infty)$ . Define a distribution $\mathcal{D}_{\eta,\gamma}$ on $\mathbb{N} = \{1,2,\dots\}$ as follows. If $\eta \neq 0$ and $K$ is drawn from $\mathcal{D}_{\eta,\gamma}$ , then + +$$
\forall k \in \mathbb {N} \quad \mathbb {P} [ K = k ] = \frac {(1 - \gamma) ^ {k}}{\gamma^ {- \eta} - 1} \cdot \prod_ {\ell = 0} ^ {k - 1} \left(\frac {\ell + \eta}{\ell + 1}\right) \tag {3}
$$ + +and $\mathbb{E}\left[K\right] = \frac{\eta\cdot(1 - \gamma)}{\gamma\cdot(1 - \gamma^{\eta})}$ . If $K$ is drawn from $\mathcal{D}_{0,\gamma}$ , then + +$$
\mathbb {P} [ K = k ] = \frac {(1 - \gamma) ^ {k}}{k \cdot \log (1 / \gamma)} \tag {4}
$$ + +and $\mathbb{E}\left[K\right] = \frac{1 / \gamma - 1}{\log(1 / \gamma)}$ . + +This is called the "negative binomial distribution," since $\prod_{\ell=0}^{k-1}\left(\frac{\ell+\eta}{\ell+1}\right)=\binom{k+\eta-1}{k}$ if we extend the definition of binomial coefficients to non-integer $\eta$ . The distribution is called "truncated" because $\mathbb{P}[K=0]=0$ , whereas the standard negative binomial distribution includes 0 in its support. The $\eta=0$ case $\mathcal{D}_{0,\gamma}$ is known as the "logarithmic distribution". The $\eta=1$ case $\mathcal{D}_{1,\gamma}$ is simply the geometric distribution. Next, we state our main privacy result for this distribution. + +Theorem 2 (Main Privacy Result - Truncated Negative Binomial).
Let $Q: \mathcal{X}^n \to \mathcal{Y}$ be a randomized algorithm satisfying $(\lambda, \varepsilon)$ -RDP and $(\hat{\lambda}, \hat{\varepsilon})$ -RDP for some $\varepsilon, \hat{\varepsilon} \geq 0$ , $\lambda \in (1, \infty)$ , and $\hat{\lambda} \in [1, \infty)$ . Assume $\mathcal{Y}$ is totally ordered. + +Let $\eta \in (-1,\infty)$ and $\gamma \in (0,1)$ . Define an algorithm $A:\mathcal{X}^n\to \mathcal{Y}$ as follows. Draw $K$ from the truncated negative binomial distribution $\mathcal{D}_{\eta ,\gamma}$ (Definition 1). Run $Q(x)$ repeatedly $K$ times. Then $A(x)$ returns the best value from the $K$ runs. + +Then $A$ satisfies $(\lambda, \varepsilon')$ -RDP where + +$$
\varepsilon^ {\prime} = \varepsilon + (1 + \eta) \cdot \left(1 - \frac {1}{\hat {\lambda}}\right) \hat {\varepsilon} + \frac {(1 + \eta) \cdot \log (1 / \gamma)}{\hat {\lambda}} + \frac {\log \mathbb {E} [ K ]}{\lambda - 1}. \tag {5}
$$ + +![](images/50aec2b498e778e3a70ec7ffa775d61262c4d25ae5efce789faaa0fba686c01a.jpg) +Figure 2: Rényi DP guarantees from Corollary 4 for various expected numbers of repetitions of the logarithmic distribution (i.e., truncated negative binomial with $\eta = 0$ ), compared with base algorithm (0.1-zCDP) and naive composition. + +![](images/d16d82b292a141abc6941e4ae7a25cfcd93d108bf6afcddedee8e83b9d19168a.jpg) +Figure 3: Rényi DP guarantees for repetition using different distributions or naive composition with mean 10, with 0.1-zCDP base algorithm. + +![](images/f5207dcd4a9fdea34caa169a9c9f0ade4ba7cdc92257644117f8f03d968347e2.jpg) +Figure 4: Privacy versus expected number of repetitions using different distributions or naive composition. Rényi DP guarantees are converted to approximate DP - i.e., we plot $\varepsilon$ such that we attain $(\varepsilon, 10^{-6})$ -DP. The base algorithm is 0.1-zCDP. + +Theorem 2 shows a tradeoff between privacy and utility for the distribution $\mathcal{D}_{\eta, \gamma}$ of the number of repetitions.
Privacy improves as $\eta$ decreases and $\gamma$ increases. However, this corresponds to fewer repetitions and thus a lower chance of success. We will study this aspect in Section 3.6. + +Theorem 2 assumes two RDP bounds, which makes it slightly hard to interpret. Thus we consider two illustrative special cases: We start with pure DP (a.k.a. pointwise DP) - i.e., $(\varepsilon, \delta)$ -DP with $\delta = 0$ , which is equivalent to $(\infty, \varepsilon)$ -RDP. This corresponds to Theorem 2 with $\lambda \to \infty$ and $\hat{\lambda} \to \infty$ . + +Corollary 3 (Theorem 2 for pure DP). Let $Q: \mathcal{X}^n \to \mathcal{Y}$ be a randomized algorithm satisfying $(\varepsilon, 0)$ -DP. Let $\eta \in (-1, \infty)$ and $\gamma \in (0, 1)$ . Define $A: \mathcal{X}^n \to \mathcal{Y}$ as in Theorem 2. Then $A$ satisfies $((2 + \eta)\varepsilon, 0)$ -DP. + +Our result is a generalization of the result of Liu & Talwar (2019) - they show that, if $K$ follows a geometric distribution and $Q$ satisfies $(\varepsilon, 0)$ -DP, then $A$ satisfies $(3\varepsilon, 0)$ -DP. Setting $\eta = 1$ in Corollary 3 recovers their result. If we set $\eta < 1$ , then we obtain an improved privacy bound. + +Another example is when $Q$ satisfies concentrated DP (Dwork & Rothblum, 2016; Bun & Steinke, 2016). This is the type of guarantee that is obtained by adding Gaussian noise to a bounded sensitivity function. In particular, this is the type of guarantee we would obtain from noisy gradient descent.[5] + +Corollary 4 (Theorem 2 for Concentrated DP). Let $Q: \mathcal{X}^n \to \mathcal{Y}$ be a randomized algorithm satisfying $\rho$ -zCDP - i.e., $(\lambda, \rho \cdot \lambda)$ -Rényi DP for all $\lambda > 1$ . Let $\eta \in (-1, \infty)$ and $\gamma \in (0, 1)$ . Define $A: \mathcal{X}^n \to \mathcal{Y}$ and $K \gets \mathcal{D}_{\eta, \gamma}$ as in Theorem 2. Assume $\rho \leq \log(1 / \gamma)$ .
Then $A$ satisfies $(\lambda, \varepsilon')$ -Rényi DP for all $\lambda > 1$ with + +$$
\varepsilon^ {\prime} = \left\{ \begin{array}{l l} 2 \sqrt {\rho \cdot \log \left(\mathbb {E} \left[ K \right]\right)} + 2 (1 + \eta) \sqrt {\rho \log (1 / \gamma)} - \eta \rho & \text {if } \lambda \leq 1 + \sqrt {\frac {1}{\rho} \log \left(\mathbb {E} \left[ K \right]\right)} \\ \rho \cdot (\lambda - 1) + \frac {1}{\lambda - 1} \log \left(\mathbb {E} \left[ K \right]\right) + 2 (1 + \eta) \sqrt {\rho \log (1 / \gamma)} - \eta \rho & \text {if } \lambda > 1 + \sqrt {\frac {1}{\rho} \log \left(\mathbb {E} \left[ K \right]\right)} \end{array} \right.
$$ + +Figure 2 shows what the guarantee of Corollary 4 looks like. Here we start with 0.1-zCDP and perform repetition following the logarithmic distribution $(\eta = 0)$ with varying scales (given by $\gamma$ ) and plot the Rényi DP guarantee attained by outputting the best of the repeated runs. The improvement over naive composition, which instead grows linearly, is clear. We also study other distributions on the number of repetitions, obtained by varying $\eta$ , and Figure 3 gives a comparison. Figure 4 shows what these bounds look like if we convert to approximate $(\varepsilon, \delta)$ -DP with $\delta = 10^{-6}$ . + +Remark 5. Corollary 4 uses the monotonicity property of Rényi divergences: If $\lambda_1 \leq \lambda_2$ , then $\mathrm{D}_{\lambda_1}(P\|Q) \leq \mathrm{D}_{\lambda_2}(P\|Q)$ (Van Erven & Harremos, 2014, Theorem 3). Thus $(\lambda_2, \varepsilon)$ -RDP implies $(\lambda_1, \varepsilon)$ -RDP for any $\lambda_1 \leq \lambda_2$ . In particular, the bound of Theorem 2 yields $\varepsilon' \to \infty$ as $\lambda \to 1$ , so we use monotonicity to bound $\varepsilon'$ for small $\lambda$ . + +**Poisson Distribution.** We next consider the Poisson distribution, which offers a different privacy-utility tradeoff than the truncated negative binomial distribution.
The Poisson distribution with mean $\mu \geq 0$ is given by $\mathbb{P}[K = k] = e^{-\mu}\frac{\mu^k}{k!}$ for all $k \geq 0$ . Note that $\mathbb{P}[K = 0] = e^{-\mu} > 0$ here, whereas the truncated negative binomial distribution does not include 0 in its support. We could condition on $K \geq 1$ here too, but we prefer to stick with the standard definition. We remark that (modulo the issue around $\mathbb{P}[K = 0]$ ) the Poisson distribution is closely related to the truncated negative binomial distribution. If we take the limit as $\eta \to \infty$ while the mean remains fixed, then the negative binomial distribution becomes a Poisson distribution. Conversely, the negative binomial distribution can be represented as a convex combination of Poisson distributions or as a compound of Poisson and logarithmic; see Appendix A.2 for more details. + +Theorem 6 (Main Privacy Result - Poisson Distribution). Let $Q: \mathcal{X}^n \to \mathcal{Y}$ be a randomized algorithm satisfying $(\lambda, \varepsilon)$ -RDP and $(\hat{\varepsilon}, \hat{\delta})$ -DP for some $\lambda \in (1, \infty)$ and $\varepsilon, \hat{\varepsilon}, \hat{\delta} \geq 0$ . Assume $\mathcal{Y}$ is totally ordered. Let $\mu > 0$ . + +Define an algorithm $A: \mathcal{X}^n \to \mathcal{Y}$ as follows. Draw $K$ from a Poisson distribution with mean $\mu$ - i.e., $\mathbb{P}[K = k] = e^{-\mu} \cdot \frac{\mu^k}{k!}$ for all $k \geq 0$ . Run $Q(x)$ repeatedly $K$ times. Then $A(x)$ returns the best value from the $K$ runs. If $K = 0$ , $A(x)$ returns some arbitrary output independent of the input $x$ . If $e^{\hat{\varepsilon}} \leq 1 + \frac{1}{\lambda - 1}$ , then $A$ satisfies $(\lambda, \varepsilon')$ -RDP where + +$$
\varepsilon^ {\prime} = \varepsilon + \mu \cdot \hat {\delta} + \frac {\log \mu}{\lambda - 1}.
$$ + +The assumptions of Theorem 6 are different from Theorem 2: We assume a Rényi DP guarantee and an approximate DP guarantee on $Q$ , rather than two Rényi DP guarantees.
We remark that a Rényi DP guarantee can be converted into an approximate DP guarantee – $(\lambda, \varepsilon)$ -RDP implies $(\hat{\varepsilon}, \hat{\delta})$ -DP for all $\hat{\varepsilon} \geq \varepsilon$ and $\hat{\delta} = e^{(\lambda - 1)(\varepsilon - \hat{\varepsilon})} \cdot \frac{1}{\lambda} \cdot \left(1 - \frac{1}{\lambda}\right)^{\lambda - 1}$ (Mironov, 2017; Canonne et al., 2020). Thus this statement can be directly compared to our other result. We show such a comparison in Figure 3 and Figure 4. The proofs of Theorems 2 and 6 are included in Appendix B.2. + +# 3.5 GENERIC RÉNYI DP BOUND FOR ANY DISTRIBUTION ON THE NUMBER OF REPETITIONS + +We now present our main technical lemma, which applies to any distribution on the number of repetitions $K$ . Theorems 2 and 6 are derived from this result. It gives a Rényi DP bound for the repeated algorithm in terms of the Rényi DP of the base algorithm and the probability generating function of the number of repetitions applied to probabilities derived from the base algorithm. + +Lemma 7 (Generic Bound). Fix $\lambda > 1$ . Let $K$ be a random variable supported on $\mathbb{N} \cup \{0\}$ . Let $f:[0,1] \to \mathbb{R}$ be the probability generating function of $K$ – i.e., $f(x) \coloneqq \sum_{k=0}^{\infty} \mathbb{P}[K = k] \cdot x^{k}$ . + +Let $Q$ and $Q'$ be distributions on $\mathcal{Y}$ . Assume $\mathcal{Y}$ is totally ordered. Define a distribution $A$ on $\mathcal{Y}$ as follows. First sample $K$ . Then sample from $Q$ independently $K$ times and output the best of these samples. This output is a sample from $A$ . We define $A'$ analogously with $Q'$ in place of $Q$ .
Then + +$$
\mathrm {D} _ {\lambda} \left(A \| A ^ {\prime}\right) \leq \mathrm {D} _ {\lambda} \left(Q \| Q ^ {\prime}\right) + \frac {1}{\lambda - 1} \log \left(f ^ {\prime} (q) ^ {\lambda} \cdot f ^ {\prime} \left(q ^ {\prime}\right) ^ {1 - \lambda}\right), \tag {6}
$$ + +where applying the same postprocessing to $Q$ and $Q'$ gives probabilities $q$ and $q'$ respectively - i.e., there exists some function $g: \mathcal{Y} \to [0,1]$ such that $q = \underset{X \leftarrow Q}{\mathbb{E}}[g(X)]$ and $q' = \underset{X' \leftarrow Q'}{\mathbb{E}}[g(X')]$ . + +The proof of this generic bound is found in Appendix B.1. To interpret the theorem, we should imagine adjacent inputs $x, x' \in \mathcal{X}^n$ , and then the distributions correspond to the algorithms run on these inputs: $A = A(x)$ , $A' = A(x')$ , $Q = Q(x)$ , and $Q' = Q(x')$ . The bounds on Rényi divergence thus correspond to Rényi DP bounds. The derivative of the probability generating function – $f'(x) = \mathbb{E}\left[K \cdot x^{K-1}\right]$ – is somewhat mysterious. A first-order intuition is that, if $q = q'$ , then $f'(q)^{\lambda} \cdot f'(q')^{1-\lambda} = f'(q) \leq f'(1) = \mathbb{E}[K]$ and thus the last term in the bound (6) is simply $\frac{\log \mathbb{E}[K]}{\lambda-1}$ . A second-order intuition is that $q \approx q'$ by DP and postprocessing and, if $f'$ is smooth, then $f'(q) \approx f'(q')$ and the first-order intuition holds up to these approximations. Vaguely, $f'$ being smooth corresponds to the distribution of $K$ being spread out (i.e., not a point mass) and not too + +![](images/849acd5515eb97905a7a0f9e04b8175d77d217b0ca8558a1e530840b68ba0cfc.jpg) +Figure 5: Expected quantile of the repeated algorithm $A$ as a function of the final privacy guarantee $(\varepsilon, 10^{-6})$ -DP for various distributions of $K$ , where each invocation of the base algorithm $Q$ is 0.1-zCDP.
+ +![](images/07d1ee421b65981311bce3a51216862a97c3fc0c242d430d6dc309df6570ccf5.jpg) +Figure 6: Final success probability $(\beta)$ of the repeated algorithm $A$ as a function of the final privacy guarantee $(\varepsilon, 10^{-6})$ -DP for various distributions, where each invocation of the base algorithm $Q$ has a $1/100$ probability of success and is $0.1$ -zCDP. + +![](images/bba8d9b77debe5ac96bafc1244732b8b8b155d7eab6957505285155c9e39fc55.jpg) +Figure 7: Accuracy of the CNN model obtained at the end of the hyperparameter search, for the different distributions on the number of repetitions $K$ we considered. We report the mean over 500 trials of the experiment. + +heavy-tailed (i.e., $K$ is small most of the time). The exact quantification of this smoothness depends on the form of the DP guarantee $q \approx q'$ . + +In our work, we primarily compare three distributions on the number of repetitions: a point mass (corresponding to naive repetition), the truncated negative binomial distribution, and the Poisson distribution. A point mass would have a polynomial as the probability generating function - i.e., if $\mathbb{P}[K = k] = 1$ , then $f(x) = \mathbb{E}[x^K] = x^k$ . The probability generating function of the truncated negative binomial distribution (Definition 1) is + +$$
f (x) = \underset {K \leftarrow \mathcal {D} _ {\eta , \gamma}} {\mathbb {E}} \left[ x ^ {K} \right] = \left\{ \begin{array}{l l} \frac {(1 - (1 - \gamma) x) ^ {- \eta} - 1}{\gamma^ {- \eta} - 1} & \text {if } \eta \neq 0 \\ \frac {\log (1 - (1 - \gamma) x)}{\log (\gamma)} & \text {if } \eta = 0 \end{array} \right. \tag {7}
$$ + +The probability generating function of the Poisson distribution with mean $\mu$ is given by $f(x) = e^{\mu \cdot (x - 1)}$ . We discuss probability generating functions further in Appendix A.2.
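As a sanity check, these closed forms can be verified numerically. The following sketch (assuming the definitions in Eqs. (3) and (4)) compares the defining series $\sum_k \mathbb{P}[K=k]\, x^k$ against the closed-form generating function of Eq. (7):

```python
import math

def tnb_pmf(k, eta, gamma):
    """P[K = k] under the truncated negative binomial D_{eta,gamma}."""
    if eta == 0.0:  # logarithmic distribution, Eq. (4)
        return (1 - gamma) ** k / (k * math.log(1 / gamma))
    prod = 1.0
    for l in range(k):  # prod_{l=0}^{k-1} (l + eta) / (l + 1)
        prod *= (l + eta) / (l + 1)
    return (1 - gamma) ** k / (gamma ** (-eta) - 1) * prod  # Eq. (3)

def tnb_pgf(x, eta, gamma):
    """Closed-form probability generating function f(x), Eq. (7)."""
    if eta == 0.0:
        return math.log(1 - (1 - gamma) * x) / math.log(gamma)
    return ((1 - (1 - gamma) * x) ** (-eta) - 1) / (gamma ** (-eta) - 1)

def poisson_pgf(x, mu):
    """Probability generating function of the Poisson distribution."""
    return math.exp(mu * (x - 1))

# Check Eq. (7) against the defining series sum_k P[K = k] * x^k.
for eta in (-0.5, 0.0, 0.5, 1.0):
    for x in (0.3, 0.7, 1.0):
        series = sum(tnb_pmf(k, eta, 0.2) * x ** k for k in range(1, 400))
        assert abs(series - tnb_pgf(x, eta, 0.2)) < 1e-9
```

The same series also recovers the mean formula of Definition 1, since $\mathbb{E}[K] = f'(1)$.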
+ +# 3.6 UTILITY AND RUNTIME OF OUR HYPERPARAMETER TUNING ALGORITHM + +Our analytical results thus far, Theorems 2 and 6 and Lemma 7, provide privacy guarantees for our hyperparameter tuning algorithm $A$ when it is used with various distributions on the number of repetitions $K$ . We now turn to the utility that this algorithm provides. The utility of a hyperparameter search is determined by how many times the base algorithm (denoted $Q$ in the theorem statements) is run when we invoke the overall algorithm $(A)$ . The more often $Q$ is run, the more likely we are to observe a good output, and the more likely $A$ is to return the corresponding hyperparameter values. Note that the number of repetitions $K$ also determines the algorithm's runtime, so these are closely linked. + +How does the distribution on the number of repetitions $K$ map to utility? As a first-order approximation, the utility and runtime are proportional to $\mathbb{E}[K]$ . Hence several of our figures compare the different distributions on $K$ based on a fixed expectation and Figure 4 plots $\mathbb{E}[K]$ on the vertical axis. However, this first-order approximation ignores the fact that some of the distributions we consider are more concentrated than others; even if the expectation is large, there might still be a significant probability that $K$ is small. Indeed, for $\eta \leq 1$ , the mode of the truncated negative binomial distribution is $K = 1$ . We found this to be an obstacle to using the (truncated) negative binomial distribution in practice in our experiments, and discuss this further in Appendix A.2.1. + +We can formulate utility guarantees more precisely by measuring our algorithm's utility via the expected quantile of its output. If we run the base algorithm $Q$ once, then the quantile of the output is (by definition) uniform on $[0,1]$ and has mean 0.5.
If we repeat the base algorithm a fixed number of times $k$ , then the quantile of the best output follows a $\mathrm{Beta}(k,1)$ distribution, as it is the maximum of $k$ independent uniform random variables. The expectation in this case is $\frac{k}{k + 1} = 1 - \frac{1}{k + 1}$ . If we repeat a random number of times $K$ , the expected quantile of the best result is given by + +$$
\mathbb {E} \left[ \frac {K}{K + 1} \right] = \mathbb {E} \left[ 1 - \frac {1}{K + 1} \right] = \int_ {0} ^ {1} x \cdot f ^ {\prime} (x) \mathrm {d} x = 1 - \int_ {0} ^ {1} f (x) \mathrm {d} x, \tag {8}
$$ + +where $f(x) = \mathbb{E}\left[x^{K}\right]$ is the probability generating function of $K$ ; see Appendix A.2.1 for further details. Figure 5 plots this quantity against privacy. We see that the Poisson distribution performs very well in an intermediate range, while the negative binomial distribution with $\eta = 0.5$ does well if we want a strong utility guarantee. This means that Poisson is best used when little privacy budget is available for the hyperparameter search. In contrast, the negative binomial distribution with $\eta = 0.5$ allows us to improve the utility of the solution returned by the hyperparameter search, but this only holds when spending a larger privacy budget (in our example, the budget has to be at least $\varepsilon = 4$ ; otherwise Poisson is more advantageous). The negative binomial with $\eta = -0.5$ does very poorly. + +From a runtime perspective, the distribution of $K$ should have light tails. All of the distributions we have considered have subexponential tails. However, larger $\eta$ corresponds to better concentration in the negative binomial distribution, with the Poisson distribution having the best concentration. + +Experimental Evaluation. To confirm these findings, we apply our algorithm to a real hyperparameter search task. Specifically, we tune the learning rate of a convolutional neural network trained on MNIST.
We implement DP-SGD in JAX for an all-convolutional architecture with a stack of 32, 32, 64, 64, 64 feature maps generated by 3x3 kernels. We vary the learning rate between 0.025 and 1 on a logarithmic scale but fix all other hyperparameters: 60 epochs, minibatch size of 256, $\ell_2$ clipping norm of 1, and noise multiplier of 1.1. In Figure 7, we plot the maximal accuracy achieved during the hyperparameter search for the different distributions considered previously as a function of the total privacy budget expended by the search. The experiment is repeated 500 times and the mean result reported. This experiment shows that the Poisson distribution achieves the best privacy-utility tradeoff for this relatively simple hyperparameter search. This agrees with the theoretical analysis presented above, which shows that the Poisson distribution performs well in the intermediate utility range into which such a simple search falls. + +# 4 CONCLUSION + +Our positive results build on the work of Liu & Talwar (2019) and show that repeatedly running the base algorithm and only returning the best output can incur much lower privacy cost than naive composition would suggest. This however requires that we randomize the number of repetitions, rather than repeating a fixed number of times. We analyze a variety of distributions for the number of repetitions, each of which gives a different privacy/utility tradeoff.
+ +While our results focused on the privacy implications of tuning hyperparameters with, and without, differential privacy, our findings echo prior observations that tuning details of the model architecture without privacy and then retraining with DP affords suboptimal utility-privacy tradeoffs (Papernot et al., 2020); in that work, the authors demonstrated that the optimal choice of activation function in a neural network can be different when learning with DP, and that tuning it with DP from the start can improve the model's utility with no change to the privacy guarantee. We envision that future work will be able to build on our algorithm for private tuning of hyperparameters to facilitate privacy-aware searches for model architectures and training-algorithm configurations that learn effectively under DP. + +Limitations. We show that hyperparameter tuning is not free from privacy cost. Our theoretical and experimental results show that, in the setting of interest, the privacy parameter may double or even triple after accounting for hyperparameter tuning, which could be prohibitive. In this case, one compromise would be to state both privacy guarantees – that of the base algorithm that does not account for hyperparameter tuning, and that of the overall system that does account for this. The reader may wonder whether our positive results can be improved. In Appendix D, we give some intuition for why they cannot (easily) be improved. We also note that our results are only immediately applicable to the hyperparameter tuning algorithm from Section 3.3. Other algorithms, in particular those that adaptively choose hyperparameter candidates, will require further analysis. + +Finally, among the distributions on the number of repetitions $K$ that we have analyzed, the distribution that provides the best privacy-utility tradeoffs will depend on the setting. While it is good to have choices, this does leave some work to be done by those using our results.
Fortunately, the differences between the distributions seem to be relatively small, so this choice is unlikely to be critical. + +# REPRODUCIBILITY & ETHICS STATEMENTS + +Reproducibility. We give precise theorem statements for our main results and we have provided complete proofs in the Appendix, as well as all the necessary calculations and formulas for plotting our figures. We have also fully specified the setup required to reproduce our experimental results, including hyperparameters. Our algorithm is simple, fully specified and can be easily implemented. + +Ethics. Our work touches on privacy, which is an ethically sensitive topic. If differentially private algorithms – such as ours – are applied to real-world sensitive data, then potential harms to the people whose data is being used must be carefully considered. However, our work is not directly using real-world sensitive data. Our main results are theoretical and our experiments use either synthetic data or MNIST, which is a standard non-private dataset. + +# ACKNOWLEDGMENTS + +The authors would like to thank the reviewers for their detailed feedback and interactive discussion during the review period. We also thank our colleagues Abhradeep Guha Thakurta, Andreas Terzis, Peter Kairouz, and Shuang Song for insightful discussions about differentially private hyperparameter tuning that led to the present project, as well as their comments on early drafts of this document. + +# REFERENCES + +Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pp. 308-318, 2016. +Borja Balle, Gilles Barthe, and Marco Gaboardi. Privacy amplification by subsampling: Tight analyses via couplings and divergences. arXiv preprint arXiv:1807.01647, 2018. +Raef Bassily, Adam Smith, and Abhradeep Thakurta. 
Private empirical risk minimization: Efficient algorithms and tight error bounds. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, pp. 464-473. IEEE, 2014. +Raef Bassily, Kobbi Nissim, Adam Smith, Thomas Steinke, Uri Stemmer, and Jonathan Ullman. Algorithmic stability for adaptive data analysis. SIAM Journal on Computing, (0):STOC16-377, 2021. +James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of machine learning research, 13(2), 2012. +Mark Bun and Thomas Steinke. Concentrated differential privacy: Simplifications, extensions, and lower bounds. In Theory of Cryptography Conference, pp. 635-658. Springer, 2016. +Mark Bun, Jonathan Ullman, and Salil Vadhan. Fingerprinting codes and the price of approximate differential privacy. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pp. 1-10, 2014. +Mark Bun, Cynthia Dwork, Guy N Rothblum, and Thomas Steinke. Composable and versatile privacy via truncated cdp. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pp. 74-86, 2018. +Clément L Canonne, Gautam Kamath, and Thomas Steinke. The discrete gaussian for differential privacy. arXiv preprint arXiv:2004.00010, 2020. +Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, and Colin Raffel. Extracting training data from large language models. arXiv preprint arXiv:2012.07805, 2020. + +Kamalika Chaudhuri and Staal A Vinterbo. A stability-based validation procedure for differentially private machine learning. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc., 2013. URL https://proceedings.neurips.cc/paper/2013/file/e6d8545daa42d5ced125a4bf747b3688-Paper.pdf. +Cynthia Dwork and Aaron Roth. 
The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4):211-407, 2014. +Cynthia Dwork and Guy N Rothblum. Concentrated differential privacy. arXiv preprint arXiv:1603.01887, 2016. +Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 486-503. Springer, 2006a. +Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of cryptography conference, pp. 265-284. Springer, 2006b. +Cynthia Dwork, Moni Naor, Omer Reingold, Guy N Rothblum, and Salil Vadhan. On the complexity of differentially private data release: efficient algorithms and hardness results. In Proceedings of the forty-first annual ACM symposium on Theory of computing, pp. 381-390, 2009. +Cynthia Dwork, Guy N Rothblum, and Salil Vadhan. Boosting and differential privacy. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, pp. 51-60. IEEE, 2010. +Konstantina Kourou, Themis P Exarchos, Konstantinos P Exarchos, Michalis V Karamouzis, and Dimitrios I Fotiadis. Machine learning applications in cancer prognosis and prediction. Computational and structural biotechnology journal, 13:8-17, 2015. +Jingcheng Liu and Kunal Talwar. Private selection from private candidates. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pp. 298-309, 2019. URL https://arxiv.org/abs/1811.07971. +Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS'07), pp. 94-103. IEEE, 2007. +Ilya Mironov. Rényi differential privacy. In 2017 IEEE 30th Computer Security Foundations Symposium (CSF), pp. 263-275. IEEE, 2017. +Ilya Mironov, Kunal Talwar, and Li Zhang. 
Rényi differential privacy of the sampled Gaussian mechanism. arXiv preprint arXiv:1908.10530, 2019. +Shubhankar Mohapatra, Sajin Sasy, Xi He, Gautam Kamath, and Om Thakkar. The role of adaptive optimizers for honest private hyperparameter selection. arXiv preprint arXiv:2111.04906, 2021. +Nicolas Papernot, Abhradeep Thakurta, Shuang Song, Steve Chien, and Úlfar Erlingsson. Tempered sigmoid activations for deep learning with differential privacy. arXiv preprint arXiv:2007.14191, 2020. +Ryan Rogers and Thomas Steinke. A better privacy analysis of the exponential mechanism. DifferentialPrivacy.org, July 2021. https://differentialprivacy.org/exponential-mechanism-bounded-range/. +Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 3-18. IEEE, 2017. +Shuang Song, Kamalika Chaudhuri, and Anand D Sarwate. Stochastic gradient descent with differentially private updates. In 2013 IEEE Global Conference on Signal and Information Processing, pp. 245-248. IEEE, 2013. +Thomas Steinke and Jonathan Ullman. Tight lower bounds for differentially private selection. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pp. 552-563. IEEE, 2017. + +Tim Van Erven and Peter Harremos. Rényi divergence and Kullback-Leibler divergence. IEEE Transactions on Information Theory, 60(7):3797-3820, 2014. + +Jenna Wiens, Suchi Saria, Mark Sendak, Marzyeh Ghassemi, Vincent X Liu, Finale Doshi-Velez, Kenneth Jung, Katherine Heller, David Kale, Mohammed Saeed, Pilar N. Ossorio, Sonoo Thadaney-Israni, and Anna Goldenberg. Do no harm: a roadmap for responsible machine learning for health care. Nature medicine, 25(9):1337-1340, 2019. + +Yuqing Zhu and Yu-Xiang Wang. Improving sparse vector technique with Rényi differential privacy. In Advances in Neural Information Processing Systems 33 preproceedings (NeurIPS 2020), 2020.
URL https://papers.nips.cc/paper/2020/hash/e9bf14a419d77534105016f5ec122d62-Abstract.html. + +# A FURTHER BACKGROUND + +# A.1 DIFFERENTIAL PRIVACY & RÉNYI DP + +For completeness, we provide some basic background on differential privacy and, in particular, Rényi differential privacy. We start with the standard definition of differential privacy: + +Definition 8 (Differential Privacy). A randomized algorithm $M: \mathcal{X}^n \to \mathcal{Y}$ is $(\varepsilon, \delta)$ -differentially private if, for all neighbouring pairs of inputs $x, x' \in \mathcal{X}^n$ and all measurable $S \subset \mathcal{Y}$ , + +$$
\mathbb {P} [ M (x) \in S ] \leq e ^ {\varepsilon} \cdot \mathbb {P} [ M (x ^ {\prime}) \in S ] + \delta .
$$ + +When $\delta = 0$ , this is referred to as pure (or pointwise) differential privacy and we may abbreviate $(\varepsilon, 0)$ -DP to $\varepsilon$ -DP. When $\delta > 0$ , this is referred to as approximate differential privacy. + +The definition of pure DP was introduced by Dwork et al. (2006b) and approximate DP was introduced by Dwork et al. (2006a). Note that the notion of neighbouring datasets is context-dependent, but this is often glossed over. Our results are general and can be applied regardless of the specifics of what is a neighbouring dataset. (However, we do require symmetry – i.e., if $(x, x')$ are a pair of neighbouring inputs, then so are $(x', x)$ .) Usually two datasets are said to be neighbouring if they differ only by the addition/removal or replacement of the data corresponding to a single individual. Some papers only consider addition or removal of a person's records, rather than replacement. But these are equivalent up to a factor of two. + +In order to define Rényi DP, we first define the Rényi divergences: + +Definition 9 (Rényi Divergences). Let $P$ and $Q$ be probability distributions on a common space $\Omega$ .
Assume that $P$ is absolutely continuous with respect to $Q$ – i.e., for all measurable $S \subset \Omega$, if $Q(S) = 0$, then $P(S) = 0$. Let $P(x)$ and $Q(x)$ denote the densities of $P$ and $Q$ respectively. The KL divergence from $P$ to $Q$ is defined as

$$
\mathrm{D}_1(P \| Q) := \underset{X \leftarrow P}{\mathbb{E}}\left[\log\left(\frac{P(X)}{Q(X)}\right)\right] = \int_{\Omega} P(x) \log\left(\frac{P(x)}{Q(x)}\right) \mathrm{d}x.
$$

The max divergence from $P$ to $Q$ is defined as

$$
\mathrm{D}_{\infty}(P \| Q) := \sup\left\{\log\left(\frac{P(S)}{Q(S)}\right) : P(S) > 0\right\}.
$$

For $\lambda \in (1,\infty)$, the Rényi divergence from $P$ to $Q$ of order $\lambda$ is defined as

$$
\begin{array}{rl}
\mathrm{D}_{\lambda}(P \| Q) &:= \frac{1}{\lambda - 1} \log\left(\underset{X \leftarrow P}{\mathbb{E}}\left[\left(\frac{P(X)}{Q(X)}\right)^{\lambda - 1}\right]\right) \\
&= \frac{1}{\lambda - 1} \log\left(\underset{X \leftarrow Q}{\mathbb{E}}\left[\left(\frac{P(X)}{Q(X)}\right)^{\lambda}\right]\right) \\
&= \frac{1}{\lambda - 1} \log\left(\int_{\Omega} P(x)^{\lambda} Q(x)^{1 - \lambda} \, \mathrm{d}x\right).
\end{array}
$$

In general, we can only define the ratio $P(x)/Q(x)$ to be the Radon-Nikodym derivative of $P$ with respect to $Q$. To talk about $P(x)$ and $Q(x)$ separately we must assume some base measure with respect to which these are defined. In most cases the base measure is either the counting measure in the case of discrete distributions or the Lebesgue measure in the case of continuous distributions.

We state some basic properties of Rényi divergences; for further information see, e.g., Van Erven & Harremos (2014).

Lemma 10. Let $P, Q, P'$, and $Q'$ be probability distributions. Let $\lambda \in [1,\infty]$. The following hold.
- Non-negativity: $\mathrm{D}_{\lambda}(P\| Q)\geq 0$.
- Monotonicity & Continuity: $\mathrm{D}_{\lambda}(P\| Q)$ is a continuous and non-decreasing function of $\lambda$.
- Data processing inequality (a.k.a. Postprocessing): Let $f(P)$ denote the distribution obtained by applying some (possibly randomized) function to a sample from $P$ and let $f(Q)$ denote the distribution obtained by applying the same function to a sample from $Q$. Then $\mathrm{D}_{\lambda}(f(P)\| f(Q)) \leq \mathrm{D}_{\lambda}(P\| Q)$.
- Finite case suffices: We have $\mathrm{D}_{\lambda}(P\| Q) = \sup_f \mathrm{D}_{\lambda}(f(P)\| f(Q))$ even when $f$ is restricted to functions with a finite range.
- Chain rule (a.k.a. Composition): $\mathrm{D}_{\lambda}(P\times P^{\prime}\| Q\times Q^{\prime}) = \mathrm{D}_{\lambda}(P\| Q) + \mathrm{D}_{\lambda}(P^{\prime}\| Q^{\prime})$, where $P\times P^{\prime}$ and $Q\times Q^{\prime}$ denote the product distributions of the individual distributions.
- Convexity: The function $(P, Q) \mapsto e^{(\lambda - 1)\mathrm{D}_{\lambda}(P\|Q)}$ is convex for all $\lambda \in (1,\infty)$. The function $(P, Q) \mapsto \mathrm{D}_{\lambda}(P\|Q)$ is convex if and only if $\lambda = 1$.

Now we can state the definition of Rényi DP (RDP), which is due to Mironov (2017).

Definition 11 (Rényi Differential Privacy). A randomized algorithm $M: \mathcal{X}^n \to \mathcal{Y}$ is $(\lambda, \varepsilon)$-Rényi differentially private if, for all neighbouring pairs of inputs $x, x' \in \mathcal{X}^n$, $\mathrm{D}_{\lambda}(M(x) \| M(x')) \leq \varepsilon$.

A closely related definition is that of zero-concentrated differential privacy (Bun & Steinke, 2016) (which is based on an earlier definition of concentrated differential privacy (Dwork & Rothblum, 2016) that does not refer to Rényi divergences).

Definition 12 (Concentrated Differential Privacy).
A randomized algorithm $M: \mathcal{X}^n \to \mathcal{Y}$ is $\rho$-zCDP if, for all neighbouring pairs of inputs $x, x' \in \mathcal{X}^n$ and all $\lambda \in (1, \infty)$, $\mathrm{D}_{\lambda}(M(x) \| M(x')) \leq \rho \cdot \lambda$.

Usually, we consider a family of $(\lambda, \varepsilon(\lambda))$-RDP guarantees, where $\varepsilon(\lambda)$ is a function of the order $\lambda$, rather than a single guarantee. Concentrated DP is one example of such a family, where the function is linear, and this captures the behaviour of many natural algorithms. In particular, it captures adding Gaussian noise to a bounded-sensitivity function: if $f: \mathcal{X}^n \to \mathbb{R}^d$ has sensitivity $\Delta$ – i.e., $\| f(x) - f(x')\|_2 \leq \Delta$ for all neighbouring $x, x'$ – and $M: \mathcal{X}^n \to \mathbb{R}^d$ is the algorithm that returns a sample from $\mathcal{N}(f(x), \sigma^2 I)$, then $M$ satisfies $\frac{\Delta^2}{2\sigma^2}$-zCDP.

We can convert from pure DP to concentrated or Rényi DP as follows (Bun & Steinke, 2016).

Lemma 13. If $M$ satisfies $(\varepsilon, 0)$-differential privacy, then $M$ satisfies $\frac{1}{2}\varepsilon^2$-zCDP – i.e., $(\lambda, \frac{1}{2}\varepsilon^2 \lambda)$-RDP for all $\lambda \in (1,\infty)$.

Conversely, we can convert from concentrated or Rényi DP to approximate DP as follows (Canonne et al., 2020).

Lemma 14. If $M$ satisfies $(\lambda, \hat{\varepsilon})$-RDP, then $M$ satisfies $(\varepsilon, \delta)$-DP, where $\varepsilon \geq 0$ is arbitrary and

$$
\delta = \frac{\exp((\lambda - 1)(\hat{\varepsilon} - \varepsilon))}{\lambda} \cdot \left(1 - \frac{1}{\lambda}\right)^{\lambda - 1}.
$$

# A.2 PROBABILITY GENERATING FUNCTIONS

Let $K$ be a random variable supported on $\mathbb{N} \cup \{0\}$. The probability generating function (PGF) of $K$ is defined by

$$
f(x) = \mathbb{E}\left[x^K\right] = \sum_{k=0}^{\infty} \mathbb{P}[K = k] \cdot x^k.
$$

The PGF $f(x)$ is always defined for $x \in [0,1]$, but may or may not be defined for $x > 1$. The PGF characterizes the distribution of $K$. In particular, we can recover the probability mass function from the derivatives of the PGF (hence the name):

$$
\mathbb{P}[K = k] = \frac{f^{(k)}(0)}{k!},
$$

where $f^{(k)}(x)$ denotes the $k^{\mathrm{th}}$ derivative of $f(x)$; in particular, $\mathbb{P}[K = 0] = f(0)$. We remark that it is often easiest to specify the PGF and derive the probability distribution from it, rather than vice versa; indeed, we arrived at the truncated negative binomial distribution by starting with the PGF that we want and then differentiating.

We can also easily recover the moments of $K$ from the PGF: We have $f^{(k)}(x) = \sum_{\ell = k}^{\infty}\mathbb{P}[K = \ell]\cdot x^{\ell - k}\cdot \ell\cdot(\ell - 1)\cdot(\ell - 2)\cdots(\ell - k + 1)$. In particular, $f(1) = \mathbb{E}[1] = 1$, $f'(1) = \mathbb{E}[K]$, and $f''(1) = \mathbb{E}[K(K - 1)]$. Note that the PGF is a rescaling of the moment generating function (MGF) $g(t) := \mathbb{E}\left[e^{tK}\right] = f(e^t)$.

The PGF can be related to the MGF in another way: Suppose $\Lambda$ is a random variable on $[0, \infty)$ and we draw $K \leftarrow \operatorname{Poisson}(\Lambda)$. Then the PGF of $K$ is the MGF of $\Lambda$ – i.e., $\mathbb{E}\left[x^K\right] = \mathbb{E}_{\Lambda}\left[\mathbb{E}_{K \leftarrow \operatorname{Poisson}(\Lambda)}\left[x^K\right]\right] = \mathbb{E}_{\Lambda}\left[e^{\Lambda \cdot (x-1)}\right] = g(x - 1)$, where $g$ is the MGF of $\Lambda$. In particular, if $\Lambda$ is drawn from a Gamma distribution, then $K$ follows a negative binomial distribution, which has a PGF of the form $f_{\mathrm{NB}}(x) = \left(\frac{1 - (1-\gamma)x}{\gamma}\right)^{-\eta}$. Note that our results work with a truncated negative binomial distribution, which is a negative binomial conditioned on $K \neq 0$.
This corresponds to an affine rescaling of the PGF, namely $f_{\mathrm{TNB}}(x) = \frac{f_{\mathrm{NB}}(x) - f_{\mathrm{NB}}(0)}{f_{\mathrm{NB}}(1) - f_{\mathrm{NB}}(0)}$.

We can also obtain a negative binomial distribution as a compound of a Poisson distribution and a logarithmic distribution. That is, if we draw $T$ from a Poisson distribution and draw $K_1, K_2, \dots, K_T$ independently from a logarithmic distribution, then $K = \sum_{t=1}^{T} K_t$ follows a negative binomial distribution. The PGF of the logarithmic distribution is given by $f_{K_t}(x) = \mathbb{E}\left[x^{K_t}\right] = \frac{\log(1 - (1-\gamma)x)}{\log(\gamma)}$ and the PGF of the Poisson distribution is given by $f_T(x) = \mathbb{E}\left[x^T\right] = e^{\mu \cdot (x-1)}$. Hence

$$
f_K(x) = \mathbb{E}\left[x^K\right] = \underset{T}{\mathbb{E}}\left[\prod_{t=1}^{T} \underset{K_t}{\mathbb{E}}\left[x^{K_t}\right]\right] = \underset{T}{\mathbb{E}}\left[f_{K_t}(x)^T\right] = f_T(f_{K_t}(x)) = \exp\left(\mu \cdot \left(\frac{\log(1 - (1-\gamma)x)}{\log(\gamma)} - 1\right)\right),
$$

which is equivalent to $f_{\mathrm{NB}}(x) = \left(\frac{1 - (1-\gamma)x}{\gamma}\right)^{-\eta}$ with $\eta = \frac{\mu}{\log(1/\gamma)}$.

Finally, we remark that we can also use the PGF to show convergence in distribution. In particular,

$$
\lim_{\eta \to \infty,\ \gamma = \frac{\eta}{\eta + \mu}} f_{\mathrm{NB}}(x) = \lim_{\eta \to \infty,\ \gamma = \frac{\eta}{\eta + \mu}} \left(\frac{1 - (1-\gamma)x}{\gamma}\right)^{-\eta} = \lim_{\eta \to \infty} \left(1 - \frac{\mu}{\eta}(x - 1)\right)^{-\eta} = e^{\mu(x-1)}.
$$

That is, if we take the limit of the negative binomial distribution as $\eta \to \infty$ while the mean $\mu = \eta \frac{1-\gamma}{\gamma}$ remains fixed, then we obtain a Poisson distribution.
If we take $\eta \to 0$, then $f_{\mathrm{NB}}(x) \to 1$, which is to say that the negative binomial distribution converges to a point mass at 0 as $\eta \to 0$. However, the truncated negative binomial distribution converges to a logarithmic distribution as $\eta \to 0$.

# A.2.1 PROBABILITY GENERATING FUNCTIONS AND UTILITY

Recall that in Section 3.6, we analyzed the expected utility and runtime under different distributions on the number of repetitions $K$. Given our discussion of probability generating functions for these distributions, we can offer an alternative perspective on the expected utility and runtime.

Suppose each invocation of $Q$ has probability $1/m$ of producing a "good" output. This would be the case if we are considering $m$ hyperparameter settings and only one is good – where here we consider the outcome to be binary (good or bad) for simplicity, and what is good or bad is determined only by the total order on the range $\mathcal{Y}$ and some threshold on the quality score (e.g., accuracy). Then $A$ has probability

$$
\beta := 1 - \mathbb{P}[A(x) \in \mathbf{Bad}] = 1 - \underset{K}{\mathbb{E}}\left[\mathbb{P}[Q(x) \in \mathbf{Bad}]^K\right] = 1 - \mathbb{E}\left[(1 - 1/m)^K\right] = 1 - f(1 - 1/m)
$$

of outputting a good output, where $f(x) = \mathbb{E}\left[x^K\right]$ is the probability generating function of the distribution of $K$. If we make the first-order approximation $f(1 - 1/m) \approx f(1) - f'(1) \cdot 1/m = 1 - \mathbb{E}[K]/m$, then we have $\beta \approx \mathbb{E}[K]/m$. In other words, for small values of $1/m$, the probability of success is amplified by a multiplicative factor of $\mathbb{E}[K]$.

However, the above first-order approximation only holds for large $m$ and, hence, small overall success probabilities $\beta$. In practice, we want $\beta \approx 1$.
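To make this concrete, the following is a small numerical sketch (with hypothetical parameter choices for $m$ and $\mathbb{E}[K]$) that evaluates $\beta = 1 - f(1 - 1/m)$ for a Poisson distribution and a truncated negative binomial with $\eta = 1$, matched to have the same mean:

```python
import math

def pgf_poisson(x, mu):
    # PGF of Poisson(mu): E[x^K] = exp(mu*(x-1)).
    return math.exp(mu * (x - 1))

def pgf_tnb(x, eta, gamma):
    # PGF of the truncated negative binomial with eta != 0 (cf. Appendix A.2).
    return ((1 - (1 - gamma) * x) ** (-eta) - 1) / (gamma ** (-eta) - 1)

m = 10            # hypothetical: 1 out of m hyperparameter settings is "good"
mu = 20.0         # mean number of repetitions E[K]
gamma = 1.0 / mu  # for eta = 1, E[K] = 1/gamma, so this matches the means

beta_poisson = 1 - pgf_poisson(1 - 1 / m, mu)
beta_tnb = 1 - pgf_tnb(1 - 1 / m, eta=1.0, gamma=gamma)

# With equal means, the more concentrated Poisson distribution gives a
# higher success probability in the beta ~ 1 regime.
assert beta_poisson > beta_tnb

# For large m, the first-order approximation beta ~ E[K]/m kicks in.
m_big = 10_000
assert abs((1 - pgf_poisson(1 - 1 / m_big, mu)) - mu / m_big) < 1e-4
```

Here $\beta_{\mathrm{Poisson}} = 1 - e^{-2} \approx 0.86$ while the matched truncated negative binomial achieves only $\approx 0.69$, consistent with the discussion below.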
The different distributions (Poisson and truncated negative binomial with different values of $\eta$) have very different behaviours even with the same expectation. In the regime where we want the overall success probability to be high (i.e., $\beta \approx 1$), smaller $\eta$ performs worse, because the distribution is more heavy-tailed. The best-performing distribution is the Poisson distribution, which is almost as concentrated as naive repetition. Figure 6 shows the success probability $\beta$ as a function of the final $(\varepsilon, 10^{-6})$-DP guarantee. This demonstrates that there is a tradeoff between distributions.

More generally, we can relate the PGF of $K$ to the expected utility of our repeated algorithm. Let $X \in \mathbb{R}$ be a random variable corresponding to the utility of one run of the base algorithm $Q$. E.g., $X$ could represent the accuracy, loss, AUC/AUROC, or simply the quantile of the output. Now let $Y \in \mathbb{R}$ be the utility of our repeated algorithm $A$, which runs the base algorithm $Q$ repeatedly $K$ times for a random $K$. That is, $Y = \max\{X_1, \dots, X_K\}$, where $X_1, X_2, \dots$ are independent copies of $X$. Let $\operatorname{cdf}_X(x) = \mathbb{P}[X \leq x]$ and

$$
\operatorname{cdf}_Y(x) = \mathbb{P}[Y \leq x] = \underset{K}{\mathbb{E}}\left[\underset{X}{\mathbb{P}}[X \leq x]^K\right] = f(\operatorname{cdf}_X(x)),
$$

where $f(x) = \mathbb{E}\left[x^K\right]$ is the PGF of the number of repetitions $K$. Assuming for the moment that $X$ is a continuous random variable, we can derive the probability density function of $Y$ from the cumulative distribution function:

$$
\operatorname{pdf}_Y(x) = \frac{\mathrm{d}}{\mathrm{d}x}\operatorname{cdf}_Y(x) = \frac{\mathrm{d}}{\mathrm{d}x} f(\operatorname{cdf}_X(x)) = f'(\operatorname{cdf}_X(x)) \cdot \operatorname{pdf}_X(x).
$$

This allows us to compute the expected utility:

$$
\mathbb{E}[Y] = \int_{-\infty}^{\infty} x \cdot \operatorname{pdf}_Y(x) \, \mathrm{d}x = \int_{-\infty}^{\infty} x \cdot f'(\operatorname{cdf}_X(x)) \cdot \operatorname{pdf}_X(x) \, \mathrm{d}x = \mathbb{E}\left[X \cdot f'(\operatorname{cdf}_X(X))\right].
$$

In particular, we can compute the expected quantile (8), in which case $X$ is uniform on $[0,1]$ and, hence, $\operatorname{cdf}_X(x) = x$ and $\operatorname{pdf}_X(x) = 1$ for $x \in [0,1]$. Integration by parts gives

$$
\mathbb{E}[Y] = \int_0^1 x \cdot f'(x) \, \mathrm{d}x = \int_0^1 \left(\frac{\mathrm{d}}{\mathrm{d}x} x f(x)\right) - f(x) \, \mathrm{d}x = 1 f(1) - 0 f(0) - \int_0^1 f(x) \, \mathrm{d}x = 1 - \int_0^1 f(x) \, \mathrm{d}x.
$$

Note that

$$
\int_0^1 f(x) \, \mathrm{d}x = \int_0^1 \underset{K}{\mathbb{E}}\left[x^K\right] \mathrm{d}x = \underset{K}{\mathbb{E}}\left[\int_0^1 x^K \, \mathrm{d}x\right] = \underset{K}{\mathbb{E}}\left[\frac{1}{K+1}\right].
$$

Finally, we also want to ensure that the runtime of our hyperparameter tuning algorithm is well-behaved. In particular, we wish to avoid heavy-tailed runtimes. We can obtain tail bounds on the number of repetitions $K$ from the PGF or MGF too: For all $t > 0$, we have

$$
\mathbb{P}[K \geq k] = \mathbb{P}\left[e^{t \cdot (K - k)} \geq 1\right] \leq \mathbb{E}\left[e^{t \cdot (K - k)}\right] = f(e^t) \cdot e^{-t \cdot k}.
$$

Thus, if the PGF $f(x) = \mathbb{E}\left[x^K\right]$ is finite for some $x = e^t > 1$, then we obtain a subexponential tail bound on $K$.

# B PROOFS FROM SECTION 3

# B.1 PROOF OF GENERIC BOUND

Proof of Lemma 7.
We assume that $\mathcal{Y}$ is a finite set and that $\mathbb{P}[K = 0] = 0$; this is, essentially, without loss of generality. Denote $Q(\leq y) := \sum_{y' \in \mathcal{Y}} \mathbb{I}[y' \leq y] \cdot Q(y')$, define $Q(<y)$ similarly, and define the analogous quantities with $Q'$ in place of $Q$. For each $y \in \mathcal{Y}$, we have

$$
\begin{array}{rl}
A(y) &= \sum_{k=1}^{\infty} \mathbb{P}[K = k] \cdot \left(Q(\leq y)^k - Q(<y)^k\right) \\
&= f(Q(\leq y)) - f(Q(<y)) \\
&= \int_{Q(<y)}^{Q(\leq y)} f'(x) \, \mathrm{d}x \\
&= Q(y) \cdot \underset{X \leftarrow [Q(<y), Q(\leq y)]}{\mathbb{E}}[f'(X)],
\end{array}
$$

where $X \leftarrow [a, b]$ denotes a uniform sample from the interval $[a, b]$, and, likewise, $A'(y) = Q'(y) \cdot \underset{X' \leftarrow [Q'(<y), Q'(\leq y)]}{\mathbb{E}}[f'(X')]$. Thus

$$
\begin{array}{rl}
e^{(\lambda - 1)\mathrm{D}_{\lambda}(A \| A')} &= \sum_{y \in \mathcal{Y}} A(y)^{\lambda} \cdot A'(y)^{1 - \lambda} \\
&= \sum_{y \in \mathcal{Y}} Q(y)^{\lambda} \cdot Q'(y)^{1 - \lambda} \cdot \underset{X \leftarrow [Q(<y), Q(\leq y)]}{\mathbb{E}}\left[f'(X)\right]^{\lambda} \cdot \underset{X' \leftarrow [Q'(<y), Q'(\leq y)]}{\mathbb{E}}\left[f'(X')\right]^{1 - \lambda} \\
&\leq \sum_{y \in \mathcal{Y}} Q(y)^{\lambda} \cdot Q'(y)^{1 - \lambda} \cdot \underset{\substack{X \leftarrow [Q(<y), Q(\leq y)] \\ X' \leftarrow [Q'(<y), Q'(\leq y)]}}{\mathbb{E}}\left[f'(X)^{\lambda} \cdot f'(X')^{1 - \lambda}\right] \\
&\leq e^{(\lambda - 1)\mathrm{D}_{\lambda}(Q \| Q')} \cdot \max_{y \in \mathcal{Y}} \underset{\substack{X \leftarrow [Q(<y), Q(\leq y)] \\ X' \leftarrow [Q'(<y), Q'(\leq y)]}}{\mathbb{E}}\left[f'(X)^{\lambda} \cdot f'(X')^{1 - \lambda}\right]
\\ \end{array}
$$

The second inequality follows from Hölder's inequality. The first inequality follows from the fact that, for any $\lambda \notin (0,1)$, the function $h: (0, \infty)^2 \to (0, \infty)$ given by $h(u, v) = u^{\lambda} \cdot v^{1-\lambda}$ is convex and, hence, $\mathbb{E}[U]^{\lambda} \, \mathbb{E}[V]^{1-\lambda} = h(\mathbb{E}[(U, V)]) \leq \mathbb{E}[h(U, V)] = \mathbb{E}\left[U^{\lambda} \cdot V^{1-\lambda}\right]$ for any pair of positive random variables $(U, V)$. Note that we require $X$ to be uniform on $[Q(<y), Q(\leq y)]$ and $X'$ to be uniform on $[Q'(<y), Q'(\leq y)]$, but their joint distribution can be arbitrary. We will couple them so that $\frac{X - Q(<y)}{Q(y)} = \frac{X' - Q'(<y)}{Q'(y)}$. In particular, this implies that, for each $y \in \mathcal{Y}$, there exists some $t \in [0, 1]$ such that

$$
\underset{\substack{X \leftarrow [Q(<y), Q(\leq y)] \\ X' \leftarrow [Q'(<y), Q'(\leq y)]}}{\mathbb{E}}\left[f'(X)^{\lambda} \cdot f'(X')^{1-\lambda}\right] \leq f'(Q(<y) + t \cdot Q(y))^{\lambda} \cdot f'(Q'(<y) + t \cdot Q'(y))^{1-\lambda}.
$$

Hence

$$
\mathrm{D}_{\lambda}(A \| A') \leq \mathrm{D}_{\lambda}(Q \| Q') + \frac{1}{\lambda - 1} \log\left(\max_{\substack{y \in \mathcal{Y} \\ t \in [0,1]}} f'(Q(<y) + t \cdot Q(y))^{\lambda} \cdot f'(Q'(<y) + t \cdot Q'(y))^{1-\lambda}\right).
$$

To prove the result, we simply fix $y_* \in \mathcal{Y}$ and $t_* \in [0,1]$ achieving the maximum above and define

$$
g(y) := \begin{cases} 1 & \text{if } y < y_* \\ t_* & \text{if } y = y_* \\ 0 & \text{if } y > y_* \end{cases}.
$$

∎

# B.2 PROOFS OF DISTRIBUTION-SPECIFIC BOUNDS

# Truncated Negative Binomial Distribution

Proof of Theorem 2.
The probability generating function of the truncated negative binomial distribution is

$$
f(x) = \underset{K \leftarrow \mathcal{D}_{\eta,\gamma}}{\mathbb{E}}\left[x^K\right] = \begin{cases} \frac{(1 - (1-\gamma)x)^{-\eta} - 1}{\gamma^{-\eta} - 1} & \text{if } \eta \neq 0 \\ \frac{\log(1 - (1-\gamma)x)}{\log(\gamma)} & \text{if } \eta = 0 \end{cases}.
$$

Thus

$$
\begin{array}{rl}
f'(x) &= (1 - (1-\gamma)x)^{-\eta - 1} \cdot \begin{cases} \frac{\eta \cdot (1-\gamma)}{\gamma^{-\eta} - 1} & \text{if } \eta \neq 0 \\ \frac{1-\gamma}{\log(1/\gamma)} & \text{if } \eta = 0 \end{cases} \\
&= (1 - (1-\gamma)x)^{-\eta - 1} \cdot \gamma^{\eta + 1} \cdot \mathbb{E}[K].
\end{array}
$$

Now we delve into the privacy analysis: Let $Q = Q(x)$ and $Q' = Q(x')$ denote the output distributions of $Q$ on two neighbouring inputs. Similarly, let $A = A(x)$ and $A' = A(x')$ be the corresponding pair of output distributions of the repeated algorithm.
By Lemma 7, for appropriate values $q, q' \in [0,1]$ and for all $\lambda > 1$ and all $\hat{\lambda} > 1$, we have

$$
\begin{array}{l}
\mathrm{D}_{\lambda}(A \| A') \\
\leq \mathrm{D}_{\lambda}(Q \| Q') + \frac{1}{\lambda - 1} \log\left(f'(q)^{\lambda} \cdot f'(q')^{1-\lambda}\right) \\
= \mathrm{D}_{\lambda}(Q \| Q') + \frac{1}{\lambda - 1} \log\left(\gamma^{\eta + 1} \cdot \mathbb{E}[K] \cdot (1 - (1-\gamma)q)^{-\lambda(\eta + 1)} \cdot (1 - (1-\gamma)q')^{-(1-\lambda)(\eta + 1)}\right) \\
= \mathrm{D}_{\lambda}(Q \| Q') + \frac{1}{\lambda - 1} \log\left(\gamma^{\eta + 1} \cdot \mathbb{E}[K] \cdot \left((\gamma + (1-\gamma)(1-q))^{1 - \hat{\lambda}} \cdot (\gamma + (1-\gamma)(1-q'))^{\hat{\lambda}}\right)^{\nu} \cdot (\gamma + (1-\gamma)(1-q))^{u}\right) \\
\qquad (\hat{\lambda}\nu = (\lambda - 1)(1 + \eta) \text{ and } (1 - \hat{\lambda})\nu + u = -\lambda(\eta + 1)) \\
\leq \mathrm{D}_{\lambda}(Q \| Q') + \frac{1}{\lambda - 1} \log\left(\gamma^{\eta + 1} \cdot \mathbb{E}[K] \cdot \left(\gamma + (1-\gamma) \cdot e^{(\hat{\lambda} - 1)\mathrm{D}_{\hat{\lambda}}(Q' \| Q)}\right)^{\nu} \cdot (\gamma + (1-\gamma)(1-q))^{u}\right)
\end{array}
$$

($1 - q$ and $1 - q'$ are postprocessings of $Q$ and $Q'$ respectively, $e^{(\hat{\lambda} - 1)\mathrm{D}_{\hat{\lambda}}(\cdot \| \cdot)}$ is convex, and $\nu \geq 0$)

$$
\begin{array}{l}
\leq \mathrm{D}_{\lambda}(Q \| Q') + \frac{1}{\lambda - 1} \log\left(\gamma^{\eta + 1} \cdot \mathbb{E}[K] \cdot \left(\gamma + (1-\gamma) \cdot e^{(\hat{\lambda} - 1)\mathrm{D}_{\hat{\lambda}}(Q' \| Q)}\right)^{\nu} \cdot \gamma^{u}\right) \qquad (\gamma \leq \gamma + (1-\gamma)(1-q) \text{ and } u \leq 0) \\
= \mathrm{D}_{\lambda}(Q \| Q') + \frac{\nu}{\lambda - 1} \log\left(\gamma + (1-\gamma) \cdot e^{(\hat{\lambda} - 1)\mathrm{D}_{\hat{\lambda}}(Q' \| Q)}\right) + \frac{1}{\lambda - 1} \log\left(\gamma^{\eta + 1} \cdot \mathbb{E}[K] \cdot \gamma^{u}\right) \\
= \mathrm{D}_{\lambda}(Q \| Q') + \frac{\nu}{\lambda - 1} \left((\hat{\lambda} - 1)\mathrm{D}_{\hat{\lambda}}(Q' \| Q) + \log\left(1 - \gamma \cdot \left(1 - e^{-(\hat{\lambda} - 1)\mathrm{D}_{\hat{\lambda}}(Q' \| Q)}\right)\right)\right) + \frac{1}{\lambda - 1} \log\left(\gamma^{u + \eta + 1} \cdot \mathbb{E}[K]\right) \\
= \mathrm{D}_{\lambda}(Q \| Q') + (1 + \eta)\left(1 - \frac{1}{\hat{\lambda}}\right)\mathrm{D}_{\hat{\lambda}}(Q' \| Q) + \frac{1 + \eta}{\hat{\lambda}} \log\left(1 - \gamma \cdot \left(1 - e^{-(\hat{\lambda} - 1)\mathrm{D}_{\hat{\lambda}}(Q' \| Q)}\right)\right) + \frac{\log(\mathbb{E}[K])}{\lambda - 1} + \frac{1 + \eta}{\hat{\lambda}} \log(1/\gamma) \\
\qquad (\nu = \frac{(\lambda - 1)(1 + \eta)}{\hat{\lambda}} \text{ and } u = -(1 + \eta)\left(\frac{\lambda - 1}{\hat{\lambda}} + 1\right)) \\
= \mathrm{D}_{\lambda}(Q \| Q') + (1 + \eta)\left(1 - \frac{1}{\hat{\lambda}}\right)\mathrm{D}_{\hat{\lambda}}(Q' \| Q) + \frac{1 + \eta}{\hat{\lambda}} \log\left(\frac{1}{\gamma} - 1 + e^{-(\hat{\lambda} - 1)\mathrm{D}_{\hat{\lambda}}(Q' \| Q)}\right) + \frac{\log(\mathbb{E}[K])}{\lambda - 1} \\
\leq \mathrm{D}_{\lambda}(Q \| Q') + (1 + \eta)\left(1 - \frac{1}{\hat{\lambda}}\right)\mathrm{D}_{\hat{\lambda}}(Q' \| Q) + \frac{1 + \eta}{\hat{\lambda}} \log\left(\frac{1}{\gamma}\right) + \frac{\log(\mathbb{E}[K])}{\lambda - 1}.
\end{array}
$$

∎

Proof of Corollary 4. We assume that $Q: \mathcal{X}^n \to \mathcal{Y}$ is a randomized algorithm satisfying $\rho$-zCDP – i.e., $(\lambda, \rho \cdot \lambda)$-Rényi DP for all $\lambda > 1$. Substituting this guarantee into Theorem 2 (i.e., setting $\varepsilon = \rho \cdot \lambda$ and $\hat{\varepsilon} = \rho \cdot \hat{\lambda}$) gives that the repeated algorithm $A$ satisfies $(\lambda, \varepsilon')$-RDP for

$$
\varepsilon' \leq \rho \cdot \lambda + (1 + \eta) \cdot \left(1 - \frac{1}{\hat{\lambda}}\right)\rho \cdot \hat{\lambda} + \frac{(1 + \eta) \cdot \log(1/\gamma)}{\hat{\lambda}} + \frac{\log \mathbb{E}[K]}{\lambda - 1}.
$$

This holds for all $\lambda \in (1,\infty)$ and all $\hat{\lambda} \in [1,\infty)$.

We set $\hat{\lambda} = \sqrt{\log(1/\gamma)/\rho}$ to minimize this expression. Note that we assume $\rho \leq \log(1/\gamma)$, hence this is a valid setting with $\hat{\lambda} \geq 1$. This reduces the expression to

$$
\varepsilon' \leq \rho \cdot \lambda - (1 + \eta) \cdot \rho + 2(1 + \eta) \cdot \sqrt{\rho \cdot \log(1/\gamma)} + \frac{\log \mathbb{E}[K]}{\lambda - 1}.
$$

This bound is minimized when $\lambda - 1 = \sqrt{\log(\mathbb{E}[K])/\rho}$. If $\lambda - 1 < \sqrt{\log(\mathbb{E}[K])/\rho}$, then we can apply the monotonicity property of Rényi DP (Remark 5 and Lemma 10) and substitute in the bound with this optimal $\lambda$.
That is, we obtain the bound

$$
\varepsilon' \leq \begin{cases} \rho \cdot \lambda - (1 + \eta) \cdot \rho + 2(1 + \eta) \cdot \sqrt{\rho \cdot \log(1/\gamma)} + \frac{\log \mathbb{E}[K]}{\lambda - 1} & \text{if } \lambda > 1 + \sqrt{\log(\mathbb{E}[K])/\rho} \\ 2\sqrt{\rho \cdot \log \mathbb{E}[K]} + 2(1 + \eta)\sqrt{\rho \cdot \log(1/\gamma)} - \eta\rho & \text{if } \lambda \leq 1 + \sqrt{\log(\mathbb{E}[K])/\rho} \end{cases}.
$$

∎

# Proof of the Poisson Distribution Bound

Proof of Theorem 6. The probability generating function of the Poisson distribution is $f(x) = \mathbb{E}\left[x^K\right] = e^{\mu(x-1)}$. Thus $f'(x) = \mu \cdot e^{\mu(x-1)}$. As in the previous proofs, let $x$ and $x'$ be neighbouring inputs and denote $Q = Q(x)$, $Q' = Q(x')$, $A = A(x)$, and $A' = A(x')$.
By Lemma 7,

$$
\begin{array}{l}
\mathrm{D}_{\lambda}(A \| A') \\
\leq \mathrm{D}_{\lambda}(Q \| Q') + \frac{1}{\lambda - 1} \log\left(f'(q)^{\lambda} \cdot f'(q')^{1-\lambda}\right) \\
= \mathrm{D}_{\lambda}(Q \| Q') + \frac{1}{\lambda - 1} \log\left(\mu \cdot e^{\mu\lambda(q - 1) + \mu(1 - \lambda)(q' - 1)}\right) \\
= \mathrm{D}_{\lambda}(Q \| Q') + \frac{\mu(\lambda q - (\lambda - 1)q' - 1) + \log \mu}{\lambda - 1} \\
= \mathrm{D}_{\lambda}(Q \| Q') + \frac{\mu((\lambda - 1)(1 - q') - \lambda(1 - q)) + \log \mu}{\lambda - 1} \\
\leq \mathrm{D}_{\lambda}(Q \| Q') + \frac{\mu((\lambda - 1)(e^{\hat{\varepsilon}}(1 - q) + \hat{\delta}) - \lambda(1 - q)) + \log \mu}{\lambda - 1}
\end{array}
$$

(by our $(\hat{\varepsilon},\hat{\delta})$-DP assumption on $Q$ and since $1 - q$ and $1 - q'$ are postprocessings of $Q$ and $Q'$)

$$
\begin{array}{l}
= \mathrm{D}_{\lambda}(Q \| Q') + \mu \cdot (1 - q) \cdot \left(e^{\hat{\varepsilon}} - \frac{\lambda}{\lambda - 1}\right) + \mu \cdot \hat{\delta} + \frac{\log \mu}{\lambda - 1} \\
\leq \mathrm{D}_{\lambda}(Q \| Q') + \mu \cdot \hat{\delta} + \frac{\log \mu}{\lambda - 1},
\end{array}
$$

where the final inequality follows from our assumption that $e^{\hat{\varepsilon}} \leq 1 + \frac{1}{\lambda - 1} = \frac{\lambda}{\lambda - 1}$.

∎

# C CONDITIONAL SAMPLING APPROACH

In the main text, we analyzed the approach where we run the underlying algorithm $Q$ a random number of times according to a carefully chosen distribution and output the best result from these independent runs.
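That random-stopping scheme can be sketched in a few lines. The following is illustrative only (the base algorithm, scoring function, and Knuth's small-$\mu$ Poisson sampler are hypothetical stand-ins, not the paper's implementation); note that with an unconditioned Poisson, $K = 0$ is possible, in which case no output is produced:

```python
import math
import random

def sample_poisson(mu, rng):
    # Knuth's Poisson sampler; fine for small mu, illustrative only.
    l, k, p = math.exp(-mu), 0, 1.0
    while p > l:
        k += 1
        p *= rng.random()
    return k - 1

def repeated_best(base, x, sample_k, key):
    # Run the base algorithm a random K times on input x and return the
    # highest-scoring output under `key` (or None if K = 0).
    k = sample_k()
    candidates = [base(x) for _ in range(k)]
    return max(candidates, key=key) if candidates else None

rng = random.Random(0)
# Hypothetical base algorithm: a noisy quality score around x.
base = lambda x: x + rng.gauss(0.0, 1.0)
y = repeated_best(base, 0.0, lambda: sample_poisson(20.0, rng), key=lambda v: v)
```

The privacy analysis above is exactly about how the distribution of `sample_k()` (point mass, Poisson, truncated negative binomial) affects the guarantee of `repeated_best` relative to one run of `base`.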
An alternative approach – also studied by Liu & Talwar (2019) – is to start with a pre-defined threshold for a "good enough" output, run $Q$ repeatedly until it produces such a result, and then output that result. This approach has some advantages, namely being simpler and avoiding the heavy-tailed behaviour of the logarithmic distribution while attaining the same kind of privacy guarantee. However, the disadvantage of this approach is that we must specify the acceptance threshold a priori. If we set the threshold too high, then we may have to keep running $Q$ for a long time.[11] If we set the threshold too low, then we may end up with a suboptimal output.

We analyze this approach under Rényi DP, thereby extending the results of Liu & Talwar (2019). Our algorithm $A$ now works as follows. We start with a base algorithm $Q$ and a set of good outputs $S$. Now $A(x)$ computes $y = Q(x)$ and, if $y \in S$, then it returns $y$ and halts. Otherwise, $A$ repeats the procedure. This is equivalent to sampling from the conditional distribution $Q(x) \mid Q(x) \in S$. The number of times $Q$ is run follows a geometric distribution with mean $1/Q(S)$.

Proposition 15. Let $\lambda \in (1,\infty)$. Let $Q$ and $Q'$ be probability distributions on $\Omega$ with $\mathrm{D}_{\lambda}(Q \| Q') < \infty$. Let $S \subset \Omega$ have nonzero measure under $Q'$ and also under $Q$. Let $Q_S$ and $Q'_S$ denote the conditional distributions of $Q$ and $Q'$ respectively, conditioned on being in the set $S$. That is, $Q_S(E) = Q(E \cap S)/Q(S)$ and $Q'_S(E) = Q'(E \cap S)/Q'(S)$ for all measurable $E \subset \Omega$.
Then, for all $p, q, r \in [1,\infty]$ satisfying $1/p + 1/q + 1/r = 1$, we have

$$
\mathrm{D}_{\lambda}\left(Q_{S} \| Q_{S}^{\prime}\right) \leq \frac{\lambda - 1/p - 1/r}{\lambda - 1} \mathrm{D}_{r \cdot (\lambda - 1/p)}\left(Q \| Q^{\prime}\right) + \frac{\lambda + 1/q - 2}{\lambda - 1} \mathrm{D}_{\lambda + 1/q - 1}\left(Q^{\prime} \| Q\right) + \frac{1/r + 1}{\lambda - 1} \log\left(\frac{1}{Q(S)}\right).
$$

Proof. For $x \in \Omega$, denote the various distributional densities at $x$ (relative to some base measure) by $Q_S(x)$, $Q_S'(x)$, $Q(x)$, and $Q'(x)$. We have $Q_S(x) = Q(x)\mathbb{I}[x \in S] / Q(S)$ and $Q_S'(x) = Q'(x)\mathbb{I}[x \in S] / Q'(S)$. Now we have

$$
\begin{array}{l} e^{(\lambda - 1) \mathrm{D}_{\lambda}(Q_{S} \| Q_{S}^{\prime})} = \int_{\Omega} Q_{S}(x)^{\lambda} Q_{S}^{\prime}(x)^{1 - \lambda} \mathrm{d}x \\ = Q(S)^{-\lambda} Q^{\prime}(S)^{\lambda - 1} \int_{\Omega} \mathbb{I}[x \in S] Q(x)^{\lambda} Q^{\prime}(x)^{1 - \lambda} \mathrm{d}x \\ \leq Q(S)^{-\lambda} Q^{\prime}(S)^{\lambda - 1} \left(\int_{S} Q(x) \mathrm{d}x\right)^{1/p} \left(\int_{S} Q^{\prime}(x) \mathrm{d}x\right)^{1/q} \left(\int_{S} \left(Q(x)^{\lambda - 1/p} Q^{\prime}(x)^{1 - \lambda - 1/q}\right)^{r} \mathrm{d}x\right)^{1/r} \quad (\text{Hölder's inequality}) \\ = Q(S)^{1/p - \lambda} Q^{\prime}(S)^{1/q + \lambda - 1} \left(\int_{S} Q(x)^{r\lambda - r/p} Q^{\prime}(x)^{r - r\lambda - r/q} \mathrm{d}x\right)^{1/r} \\ = Q^{\prime}(S)^{\lambda_{0}} Q(S)^{1 - \lambda_{0}} Q(S)^{-1/r - 1} \left(\int_{S} Q(x)^{\lambda_{1}} Q^{\prime}(x)^{1 - \lambda_{1}} \mathrm{d}x\right)^{1/r} \\ \left(\lambda_{0} := \lambda + 1/q - 1, \ \lambda_{1} := r\lambda - r/p\right) \\ \leq e^{(\lambda_{0} - 1) \mathrm{D}_{\lambda_{0}}(Q^{\prime} \| Q)} \cdot Q(S)^{-1/r - 1} \cdot \left(e^{(\lambda_{1} - 1) \mathrm{D}_{\lambda_{1}}(Q \| Q^{\prime})}\right)^{1/r}. \\ \end{array}
$$

(Postprocessing & non-negativity)

The number of parameters in Proposition 15 is excessive. Thus we provide some corollaries that simplify the expression somewhat.

Corollary 16. Let $\lambda, Q, Q', S, Q_S, Q_S'$ be as in Proposition 15. The following inequalities all hold.

$$
\mathrm{D}_{\infty}\left(Q_{S} \| Q_{S}^{\prime}\right) \leq \mathrm{D}_{\infty}\left(Q \| Q^{\prime}\right) + \mathrm{D}_{\infty}\left(Q^{\prime} \| Q\right).
$$

$$
\mathrm{D}_{\lambda}\left(Q_{S} \| Q_{S}^{\prime}\right) \leq \mathrm{D}_{\lambda}\left(Q \| Q^{\prime}\right) + \frac{\lambda - 2}{\lambda - 1} \mathrm{D}_{\lambda - 1}\left(Q^{\prime} \| Q\right) + \frac{2}{\lambda - 1} \log\left(\frac{1}{Q(S)}\right).
$$

$$
\mathrm{D}_{\lambda}\left(Q_{S} \| Q_{S}^{\prime}\right) \leq \mathrm{D}_{\infty}\left(Q \| Q^{\prime}\right) + \frac{\lambda - 2}{\lambda - 1} \mathrm{D}_{\lambda - 1}\left(Q^{\prime} \| Q\right) + \frac{1}{\lambda - 1} \log\left(\frac{1}{Q(S)}\right).
$$

$$
\mathrm{D}_{\lambda}\left(Q_{S} \| Q_{S}^{\prime}\right) \leq \frac{\lambda}{\lambda - 1} \mathrm{D}_{\infty}\left(Q \| Q^{\prime}\right) + \mathrm{D}_{\lambda}\left(Q^{\prime} \| Q\right) + \frac{1}{\lambda - 1} \log\left(\frac{1}{Q(S)}\right).
$$

$$
\forall r \geq 1 \quad \mathrm{D}_{\lambda}(Q_{S} \| Q_{S}^{\prime}) \leq \mathrm{D}_{r(\lambda - 1) + 1}(Q \| Q^{\prime}) + \frac{\lambda - 2}{\lambda - 1} \mathrm{D}_{\lambda - 1}(Q^{\prime} \| Q) + \frac{1/r + 1}{\lambda - 1} \log\left(\frac{1}{Q(S)}\right).
$$

The first inequality in Corollary 16 is essentially the result given by Liu & Talwar (2019): If $Q$ satisfies $\varepsilon$-DP, then $A$ satisfies $2\varepsilon$-DP.
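These inequalities are easy to sanity-check numerically. Below is a small sketch (ours; the distributions and the conditioning set $S$ are arbitrary illustrative choices) that verifies the first inequality of Corollary 16 on a three-point space:

```python
import math

def d_inf(p, q):
    """Max divergence D_inf(P || Q) = max over the support of P of log(P(x)/Q(x))."""
    return max(math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def condition(p, S):
    """Restrict the distribution p to the index set S and renormalize."""
    mass = sum(p[i] for i in S)
    return [p[i] / mass for i in S]

# Arbitrary distributions on {0, 1, 2} and a conditioning set S.
Q  = [0.2, 0.3, 0.5]
Qp = [0.3, 0.2, 0.5]
S = [0, 1]

Q_S, Qp_S = condition(Q, S), condition(Qp, S)
lhs = d_inf(Q_S, Qp_S)             # D_inf(Q_S || Q'_S)
rhs = d_inf(Q, Qp) + d_inf(Qp, Q)  # D_inf(Q || Q') + D_inf(Q' || Q)
print(lhs <= rhs + 1e-12)
```

Swapping `d_inf` for a finite-order Rényi divergence would let one check the remaining inequalities in the same fashion.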
Figure 8 plots the guarantee of the second inequality in Corollary 16 when $\mathrm{D}_{\lambda}(Q\| Q^{\prime}) = 0.1\lambda$ and $\mathrm{D}_{\lambda -1}(Q^{\prime}\| Q) = 0.1(\lambda -1)$.

# D NEGATIVE RESULTS ON IMPROVEMENTS TO OUR ANALYSIS

It is natural to wonder whether our results could be further improved. In this section, we give some examples demonstrating that, quantitatively, there is little room for improvement.

# D.1 WHY A FIXED NUMBER OF REPETITIONS DOES NOT RESULT IN GOOD PRIVACY.

We first consider more closely the strawman approach discussed in Section 3.2: the base algorithm $Q$ is repeated a fixed number of times $k$ and we return the best output. This corresponds to picking $k$ from a point mass distribution. To understand why it performs so poorly from a privacy standpoint, we first apply our main result from Section 3.5 to the resulting point mass distribution.

Point Mass: Suppose $K$ is a point mass, i.e., $\mathbb{P}[K = k] = 1$ for a fixed $k$. So $A$ runs the algorithm $Q$ a deterministic number of times. Then the probability generating function (PGF) is $f(x) = x^{k}$ and its derivative is $f'(x) = k \cdot x^{k-1}$. Let $Q$ denote the base algorithm. We abuse notation and let $Q = Q(x)$ and $Q' = Q(x')$, where $x$ and $x'$ are neighbouring inputs. Similarly, let $A = A(x)$ and $A' = A(x')$ be the final output distributions obtained by running $Q$ and $Q'$ repeatedly $k$ times and returning the best result.
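The fixed-repetition construction itself is simple to compute with. The sketch below (ours, for illustration; not from any accompanying codebase) derives the exact output distribution of "run $Q$ $k$ times and keep the best" over a finite ordered outcome space, by applying the PGF $f(x) = x^k$ to tail probabilities:

```python
def best_of_k(probs, k):
    """Exact output distribution of 'run Q k times, return the best', where
    index 0 is the most-preferred outcome and larger indices are worse.
    P[best is outcome i or worse] = P[Q is outcome i or worse]^k, i.e. the
    PGF f(x) = x^k applied to the tail probability."""
    n = len(probs)
    tail = [sum(probs[i:]) for i in range(n)] + [0.0]
    f = lambda x: x ** k  # PGF of the point mass at k
    return [f(tail[i]) - f(tail[i + 1]) for i in range(n)]

# Example: Q puts mass (0.1, 0.9) on outcomes (good, bad); k = 3 repetitions.
# The bad outcome survives only if all three runs are bad: P = 0.9**3.
A = best_of_k([0.1, 0.9], 3)
```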
We follow the same pattern of analysis that we applied to the other distributions in Theorems 2 and 6: Lemma 7 gives the bound

$$
\begin{array}{l} \mathrm{D}_{\lambda}\left(A \| A^{\prime}\right) \leq \mathrm{D}_{\lambda}\left(Q \| Q^{\prime}\right) + \frac{1}{\lambda - 1} \log\left(k \cdot \left(q^{\lambda} \cdot (q^{\prime})^{1 - \lambda}\right)^{k - 1}\right) \\ \leq \mathrm{D}_{\lambda}\left(Q \| Q^{\prime}\right) + \frac{1}{\lambda - 1} \log\left(k \cdot \left(e^{(\lambda - 1) \mathrm{D}_{\lambda}\left(\operatorname{Bern}(q) \| \operatorname{Bern}(q^{\prime})\right)}\right)^{k - 1}\right) \\ \leq \mathrm{D}_{\lambda}(Q \| Q^{\prime}) + \frac{1}{\lambda - 1} \log\left(k \cdot \left(e^{(\lambda - 1) \mathrm{D}_{\lambda}(Q \| Q^{\prime})}\right)^{k - 1}\right) \\ = k \cdot \mathrm{D}_{\lambda}(Q \| Q^{\prime}) + \frac{\log k}{\lambda - 1}, \\ \end{array}
$$

where the final inequality follows from the fact that $\operatorname{Bern}(q)$ and $\operatorname{Bern}(q')$ are postprocessings of $Q$ and $Q'$ respectively.

This bound is terrible. In fact, it is slightly worse than a naive composition analysis which would give $\mathrm{D}_{\lambda}(A \| A') \leq \mathrm{D}_{\lambda}\left(Q^{\otimes k}\big\| Q^{\prime \otimes k}\right) = k \cdot \mathrm{D}_{\lambda}(Q\| Q^{\prime})$. It shows that a deterministic number of repetitions does not yield good privacy parameters, at least with this analysis.

It is surprising that running the base algorithm $Q$ a fixed number of times $k$ and returning the best output performs so poorly from a privacy standpoint. We will now give a simple example that demonstrates that this is inherent and not just a limitation of our analysis. Liu & Talwar (2019, Appendix B) give a similar example.

Proposition 17. For all $\varepsilon > 0$, there exists an $\varepsilon$-DP algorithm $Q:\mathcal{X}^n\to \mathcal{Y}$ such that the following holds.
Define an algorithm $A:\mathcal{X}^n\to \mathcal{Y}$ that runs $Q$ a fixed number of times $k$ and returns the best output from these runs. Then $A$ is not $\hat{\varepsilon}$-DP for any $\hat{\varepsilon} < k\varepsilon$. Furthermore, for all $\lambda > 1$, $A$ is not $(\lambda, \hat{\varepsilon}(\lambda))$-Rényi DP for any $\hat{\varepsilon}(\lambda) < \varepsilon'(\lambda)$, where

$$
\varepsilon^{\prime}(\lambda) = k\varepsilon - \frac{k \cdot \log(1 + e^{-\varepsilon})}{\lambda - 1}.
$$

Proof. The base algorithm is simply randomized response. We will let $\mathcal{Y} = \{1,2\}$ with the total order preferring 1, then 2. We will define a pair of distributions $Q$ and $Q^{\prime}$ on $\{1,2\}$ and then the base algorithm is simply set so that these are its output distributions on a pair of neighbouring inputs.

We let

$$
\begin{array}{l} Q = \left(\frac{1}{1 + e^{\varepsilon}}, \frac{e^{\varepsilon}}{1 + e^{\varepsilon}}\right), \\ Q^{\prime} = \left(\frac{e^{\varepsilon}}{1 + e^{\varepsilon}}, \frac{1}{1 + e^{\varepsilon}}\right). \\ \end{array}
$$

Then $\mathrm{D}_{\infty}(Q\| Q^{\prime}) = \mathrm{D}_{\infty}(Q^{\prime}\| Q) = \varepsilon$. Thus we can ensure that the base algorithm yielding this pair of distributions is $\varepsilon$-DP.

Now we look at the corresponding pair of distributions from repeating the base algorithm $k$ times. We have

$$
A = \left(1 - \left(\frac{e^{\varepsilon}}{1 + e^{\varepsilon}}\right)^{k}, \left(\frac{e^{\varepsilon}}{1 + e^{\varepsilon}}\right)^{k}\right),
$$

$$
A^{\prime} = \left(1 - \left(\frac{1}{1 + e^{\varepsilon}}\right)^{k}, \left(\frac{1}{1 + e^{\varepsilon}}\right)^{k}\right).
$$

The first part of the result follows:

$$
\mathrm{D}_{\infty}\left(A \| A^{\prime}\right) \geq \log\left(\frac{\left(\frac{e^{\varepsilon}}{1 + e^{\varepsilon}}\right)^{k}}{\left(\frac{1}{1 + e^{\varepsilon}}\right)^{k}}\right) = k\varepsilon.
$$

For all $\lambda > 1$,

$$
\begin{array}{l} e^{(\lambda - 1) \mathrm{D}_{\lambda}(A \| A^{\prime})} \geq \left(\left(\frac{e^{\varepsilon}}{1 + e^{\varepsilon}}\right)^{k}\right)^{\lambda} \cdot \left(\left(\frac{1}{1 + e^{\varepsilon}}\right)^{k}\right)^{1 - \lambda} \\ = e^{\varepsilon k \lambda} \cdot (1 + e^{\varepsilon})^{-k}. \\ \end{array}
$$

Hence

$$
\mathrm{D}_{\lambda}\left(A \| A^{\prime}\right) \geq \frac{\varepsilon k \lambda - k \cdot \log\left(1 + e^{\varepsilon}\right)}{\lambda - 1} = k\varepsilon - \frac{k \cdot \log\left(1 + e^{-\varepsilon}\right)}{\lambda - 1}.
$$

The second part of Proposition 17 shows that this problem is not specific to pure DP. For $\lambda \geq 1 + 1/\varepsilon$, we have $\varepsilon'(\lambda) = \Omega(k\varepsilon)$, so we are paying linearly in $k$.

However, Proposition 17 is somewhat limited to pure $\varepsilon$-DP or at least $(\lambda, \varepsilon(\lambda))$-RDP with not-too-small values of $\lambda$. This is because the "bad" event is relatively low-probability. Specifically, the high privacy loss event has probability $(1 + e^{-\varepsilon})^{-k}$. This is small, unless $\varepsilon \geq \Omega(\log k)$.

We can change the example to make the bad event happen with constant probability. However, the base algorithm will also not be pure $\varepsilon$-DP any more.
Specifically, we can replace the two distributions in Proposition 17 with the following: + +$$ +Q = (1 - \exp (- 1 / k), \exp (- 1 / k)), +$$ + +$$ +Q ^ {\prime} = (1 - \exp (- \varepsilon_ {0} - 1 / k), \exp (- \varepsilon_ {0} - 1 / k)). +$$ + +If we repeat this base algorithm a fixed number of times $k$ , then the corresponding pair of distributions is given by + +$$ +A = (1 - \exp (- 1), \exp (- 1)), +$$ + +$$ +A ^ {\prime} = (1 - \exp (- k \varepsilon_ {0} - 1), \exp (- k \varepsilon_ {0} - 1)). +$$ + +Now we have $\mathrm{D}_{\infty}(A \| A') = k\varepsilon_0$ and the bad event happens with probability $e^{-1} \approx 0.36$ . On the other hand, $\mathrm{D}_{\infty}(Q \| Q') = \varepsilon_0$ like before, but $\mathrm{D}_{\infty}(Q' \| Q) = \log (1 - \exp (-\varepsilon_0 - 1 / k)) - \log (1 - \exp (-1 / k)) \approx \log (k\varepsilon_0 + 1)$ . But we still have a good guarantee in terms of Rényi divergences. In particular, $\mathrm{D}_1(Q' \| Q) \leq (\varepsilon_0 + 1 / k)\log (k\varepsilon_0 + 1)$ , and we can set $\varepsilon_0 \leq o(1 / \log k)$ to ensure that we get reasonable $(\lambda, \varepsilon(\lambda))$ -RDP guarantees for small $\lambda$ . + +At a higher level, it should not be a surprise that this negative example is relatively brittle. Our positive results show that it only takes a very minor adjustment to the number of repetitions to obtain significantly tighter privacy guarantees for hyperparameter tuning than what one would obtain from naive composition. In particular, running a fixed number of times $k$ versus running $\mathrm{Poisson}(k)$ times is not that different, but our positive results show that it already circumvents this problem in general. + +We also remark that composition behaves differently in the low-order RDP or approx DP regime relative to the pure-DP or high-order RDP regime covered by Proposition 17. 
Thus the naive composition baseline we compare to is also shifting from basic composition ( $\varepsilon$ -DP becomes $k\varepsilon$ -DP under $k$ -fold repetition (Dwork et al., 2006b)) to advanced composition ( $\varepsilon$ -DP becomes $(O(\varepsilon \cdot \sqrt{k \cdot \log(1 / \delta)}), \delta)$ -DP (Dwork et al., 2010)). Proving tightness of basic composition is easy, but proving tightness for advanced composition is non-trivial (in general, it relies on the machinery of fingerprinting codes (Bun et al., 2014)). This means it is not straightforward to extend Proposition 17 to this regime. + +# D.2 TIGHT EXAMPLE FOR CONDITIONAL SAMPLING. + +We are also interested in the tightness of our generic results. We begin by studying the conditional sampling approach outlined in Appendix C. This approach is simpler and it is therefore easier to give a tight example. This also proves to be instructive for the random repetition approach in Section 3. + +Our tight example for the conditional sampling approach consists of a pair of distributions $Q$ and $Q'$ . These should be thought of as the output distributions of the algorithm $Q$ on two neighbouring inputs. The distributions are supported on only three points. Such a small output space seems contrived, but it should be thought of as representing a partition of a large output space into three sets. The first set is where the privacy loss is large and the second set is where the privacy loss is very negative, while the third set is everything in between. + +Fix $s, t > 0$ and $a \in [0,1/4]$ . Note $(1 - 2a)^{-1} \leq e^{3a}$ . 
Let

$$
Q = \left(a \cdot e^{-s}, a \cdot e^{-s}, 1 - 2a \cdot e^{-s}\right),
$$

$$
Q^{\prime} = \left(a \cdot e^{-s-t}, a, 1 - a - a \cdot e^{-s-t}\right),
$$

$$
S = \{1, 2\} \subset \{1, 2, 3\},
$$

$$
Q_{S} = \left(\frac{1}{2}, \frac{1}{2}\right),
$$

$$
Q_{S}^{\prime} = \left(\frac{e^{-s-t}}{1 + e^{-s-t}}, \frac{1}{1 + e^{-s-t}}\right).
$$

Intuitively, the set $S$ corresponds to the outputs that (1) have large privacy loss or (2) very negative privacy loss and we exclude (3) the outputs with middling privacy loss. Once we condition on $S$ we still have the outputs (1) with large privacy loss, but that privacy loss is further increased because of the renormalization. Specifically, the negative privacy loss means the renormalization constants are very different: $Q(S) = 2a \cdot e^{-s} \ll Q'(S) = a \cdot (1 + e^{-s-t})$ if $s$ is large. In effect, the negative privacy loss becomes a positive privacy loss that is added to the already large privacy loss.

![](images/9f8a06b799974bc4d81e9e74d525cab52c277345d2f59385b2be83f0e9ef9cd1.jpg)
Figure 8: Upper and lower bounds for Rényi DP of conditional sampling. For each $\lambda$, we pick the parameters $s$ and $t$ such that $\mathrm{D}_{\lambda}(Q\| Q^{\prime}) = \mathrm{D}_{\lambda}(Q^{\prime}\| Q) = 0.1\cdot \lambda$ and we plot the upper bound from the second inequality in Corollary 16 along with the exact value of $\mathrm{D}_{\lambda}(Q_S\| Q_S^{\prime})$.

We make the above intuition more precise by calculating the various quantities.
For all $\lambda > 1$, we have

$$
Q(S) = 2a \cdot e^{-s},
$$

$$
\begin{array}{l} e^{(\lambda - 1) \mathrm{D}_{\lambda}(Q \| Q^{\prime})} = a \cdot e^{-s\lambda - (s+t)(1-\lambda)} + a \cdot e^{-s\lambda} + (1 - 2a \cdot e^{-s})^{\lambda} (1 - a - a \cdot e^{-s-t})^{1-\lambda} \\ \leq a \cdot e^{(\lambda - 1)t - s} + a \cdot e^{-s\lambda} + e^{-2a e^{-s} \lambda} (1 - 2a)^{1-\lambda} \\ \leq a \cdot e^{(\lambda - 1)t - s} + a + e^{3a(\lambda - 1)} \\ \approx a \cdot e^{(\lambda - 1)t - s} \quad (\text{assuming } t \text{ is large}) \\ = \frac{1}{2} Q(S) \cdot e^{(\lambda - 1)t}, \\ \end{array}
$$

$$
\Rightarrow \mathrm{D}_{\lambda}(Q \| Q^{\prime}) \lesssim t - \frac{\log(2/Q(S))}{\lambda - 1},
$$

$$
\begin{array}{l} e^{(\lambda - 1) \mathrm{D}_{\lambda}(Q^{\prime} \| Q)} = a \cdot e^{-s(1-\lambda) - (s+t)\lambda} + a \cdot e^{-s(1-\lambda)} + (1 - 2a \cdot e^{-s})^{1-\lambda} (1 - a - a \cdot e^{-s-t})^{\lambda} \\ \leq a \cdot e^{-t\lambda - s} + a \cdot e^{s(\lambda - 1)} + e^{3a \cdot e^{-s} \cdot (\lambda - 1)} \cdot e^{-a\lambda} \\ \approx a \cdot e^{s(\lambda - 1)} \quad (\text{assuming } s \text{ is large}) \\ = \frac{1}{2} Q(S) \cdot e^{s\lambda} \\ \end{array}
$$

$$
\Rightarrow \mathrm{D}_{\lambda}\left(Q^{\prime} \| Q\right) \lesssim s + \frac{s - \log(2/Q(S))}{\lambda - 1},
$$

$$
\begin{array}{l} e^{(\lambda - 1) \mathrm{D}_{\lambda}\left(Q_{S} \| Q_{S}^{\prime}\right)} = 2^{-\lambda} (1 + e^{-s-t})^{\lambda - 1} \left(e^{(\lambda - 1)(s+t)} + 1\right) \\ \geq 2^{-\lambda} \cdot e^{(\lambda - 1)(s+t)}, \\ \Longrightarrow \mathrm{D}_{\lambda}\left(Q_{S} \| Q_{S}^{\prime}\right) \geq s + t - \frac{\lambda \log 2}{\lambda - 1}.
\\ \end{array} +$$ + +We contrast this with our upper bound from the second part of Corollary 16: + +$$ +\begin{array}{l} \mathrm {D} _ {\lambda} \left(Q _ {S} \| Q _ {S} ^ {\prime}\right) \leq \mathrm {D} _ {\lambda} \left(Q \| Q ^ {\prime}\right) + \frac {\lambda - 2}{\lambda - 1} \mathrm {D} _ {\lambda - 1} \left(Q ^ {\prime} \| Q\right) + \frac {2}{\lambda - 1} \log \left(\frac {1}{Q (S)}\right). \\ \lesssim t - \frac {\log (2 / Q (S))}{\lambda - 1} + \frac {\lambda - 2}{\lambda - 1} \left(s + \frac {s - \log (2 / Q (S))}{\lambda - 2}\right) + \frac {2 \log (1 / Q (S))}{\lambda - 1} \\ = s + t - \frac {2 \log 2}{\lambda - 1}. \\ \end{array} +$$ + +This example shows that our upper bound is tight up to small factors, namely the lower order terms we ignore with $\approx$ and $\frac{\lambda - 2}{\lambda - 1}\log 2$ . Figure 8 illustrates how the upper and lower bounds compare. + +# D.3 TIGHTNESS OF OUR GENERIC RESULT. + +Now we consider the setting from Section 3 where our base algorithm is run repeatedly a random number of times and the best result is given as output. + +Let $Q: \mathcal{X}^n \to \mathcal{Y}$ denote the base algorithm. Assume $\mathcal{Y}$ is totally ordered. Let $K \in \mathbb{N}$ be a random variable and let $f(x) = \mathbb{E}[x^K]$ be its probability generating function. Define $A: \mathcal{X}^n \to \mathcal{Y}$ to be the algorithm that runs $Q$ repeatedly $K$ times and returns the best output. + +For a tight example, we again restrict our attention to distributions supported on three points: + +$$ +\begin{array}{l} Q = Q (x) = (1 - b - c, b, c), \\ Q ^ {\prime} = Q \left(x ^ {\prime}\right) = \left(1 - b ^ {\prime} - c ^ {\prime}, b ^ {\prime}, c ^ {\prime}\right), \\ A = A (x) = (1 - f (b + c), f (b + c) - f (c), f (c)), \\ A ^ {\prime} = A \left(x ^ {\prime}\right) = \left(1 - f \left(b ^ {\prime} + c ^ {\prime}\right), f \left(b ^ {\prime} + c ^ {\prime}\right) - f \left(c ^ {\prime}\right), f \left(c ^ {\prime}\right)\right). 
\\ \end{array} +$$ + +Here the total ordering prefers the first option (corresponding to the first coordinate probability), then the second, and then the third, which implies the expressions for $A$ and $A'$ . Note that the probability values are not necessarily ordered the same way as the ordering on outcomes. + +Now we must set these four values to show tightness of our results. + +We make the first-order approximation + +$$ +\begin{array}{l} A \approx (1 - f (b + c), f ^ {\prime} (c) \cdot b, f (c)), \\ A ^ {\prime} \approx (1 - f (b ^ {\prime} + c ^ {\prime}), f ^ {\prime} (c ^ {\prime}) \cdot b ^ {\prime}, f (c ^ {\prime})). \\ \end{array} +$$ + +We take this approximation and get + +$$ +\begin{array}{l} e ^ {(\lambda - 1) \mathrm {D} _ {\lambda} (A \| A ^ {\prime})} \gtrsim (f ^ {\prime} (c) \cdot b) ^ {\lambda} \cdot (f ^ {\prime} (c ^ {\prime}) \cdot b ^ {\prime}) ^ {1 - \lambda} \\ = e ^ {(\lambda - 1) \mathrm {D} _ {\lambda} (b \| b ^ {\prime})} \cdot \left(f ^ {\prime} (c)\right) ^ {\lambda} \cdot \left(f ^ {\prime} \left(c ^ {\prime}\right)\right) ^ {1 - \lambda} \\ \approx e ^ {(\lambda - 1) \mathrm {D} _ {\lambda} (Q \| Q ^ {\prime})} \cdot \left(f ^ {\prime} (c)\right) ^ {\lambda} \cdot \left(f ^ {\prime} (c ^ {\prime})\right) ^ {1 - \lambda}, \\ \end{array} +$$ + +where the final approximation assumes that the second term is dominant in the equation + +$$ +e ^ {(\lambda - 1) \mathrm {D} _ {\lambda} (Q \| Q ^ {\prime})} = (1 - b - c) ^ {\lambda} \cdot (1 - b ^ {\prime} - c ^ {\prime}) ^ {1 - \lambda} + b ^ {\lambda} \cdot (b ^ {\prime}) ^ {1 - \lambda} + (c) ^ {\lambda} \cdot (c ^ {\prime}) ^ {1 - \lambda}. 
$$

Contrast this with our upper bound (Lemma 7), which says

$$
e^{(\lambda - 1) \mathrm{D}_{\lambda}(A \| A^{\prime})} \leq e^{(\lambda - 1) \mathrm{D}_{\lambda}(Q \| Q^{\prime})} \cdot (f^{\prime}(q))^{\lambda} \cdot (f^{\prime}(q^{\prime}))^{1 - \lambda},
$$

where $q$ and $q'$ are arbitrary postprocessings of $Q$ and $Q'$. In particular, we can set the values so that $q = c$ and $q' = c'$. This is not a formal proof, since we make imprecise approximations. But it illustrates that our main generic result (Lemma 7) is tight up to low order terms.

# D.4 SELECTION & LOWER BOUNDS

Private hyperparameter tuning is a generalization of the private selection problem. In the private selection problem we are given a utility function $u: \mathcal{X}^n \times [m] \to \mathbb{R}$ which has sensitivity 1 in its first argument, i.e., for all neighbouring $x, x' \in \mathcal{X}^n$ and all $j \in [m] = \{1, 2, \dots, m\}$, we have $|u(x, j) - u(x', j)| \leq 1$. The goal is to output an approximation to $\arg \max_{j \in [m]} u(x, j)$ subject to differential privacy.

The standard algorithm for private selection is the exponential mechanism (McSherry & Talwar, 2007). The exponential mechanism is defined by

$$
\forall j \in [m] \quad \mathbb{P}[M(x) = j] = \frac{\exp\left(\frac{\varepsilon}{2} u(x, j)\right)}{\sum_{\ell \in [m]} \exp\left(\frac{\varepsilon}{2} u(x, \ell)\right)}.
$$

It provides $(\varepsilon, 0)$-DP and, at the same time, $\frac{1}{8}\varepsilon^2$-zCDP (Rogers & Steinke, 2021).
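As a concrete illustration, here is a minimal sketch (ours; function and variable names are our own) of sampling from the exponential mechanism's distribution over a finite candidate set:

```python
import math
import random

def exponential_mechanism(u_values, eps, rng=random):
    """Sample index j with probability proportional to exp((eps/2) * u(x, j)),
    where u_values[j] = u(x, j) and u has sensitivity 1 in x."""
    # Shift by the maximum utility before exponentiating, for numerical
    # stability; this does not change the sampling probabilities.
    u_max = max(u_values)
    weights = [math.exp(0.5 * eps * (u - u_max)) for u in u_values]
    total = sum(weights)
    r = rng.random() * total
    for j, w in enumerate(weights):
        r -= w
        if r <= 0:
            return j
    return len(u_values) - 1  # guard against floating-point underflow

# With a large utility gap, the mechanism concentrates on the argmax.
j = exponential_mechanism([0.0, 10.0, 0.0], eps=5.0, rng=random.Random(0))
```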
On the utility side, we have the guarantees

$$
\mathbb{E}\left[u(x, M(x))\right] \geq \max_{j \in [m]} u(x, j) - \frac{2}{\varepsilon} \log m,
$$

$$
\mathbb{P}\left[u(x, M(x)) \geq \max_{j \in [m]} u(x, j) - \frac{2}{\varepsilon} \log\left(\frac{m}{\beta}\right)\right] \geq 1 - \beta
$$

for all inputs $x$ and all $\beta > 0$ (Bassily et al., 2021, Lemma 7.1) (Dwork & Roth, 2014, Theorem 3.11).

It is also well-known that the exponential mechanism is optimal up to constants. That is, $(\varepsilon, 0)$-DP selection entails an additive error of $\Omega(\log(m)/\varepsilon)$ (Steinke & Ullman, 2017).

Our results can be applied to the selection problem. The base algorithm $Q(x)$ will simply pick an index $j \in [m]$ uniformly at random and then privately estimate $u(x, j)$ by adding Laplace or Gaussian noise $\xi$ and output the pair $(j, u(x, j) + \xi)$. The total order on the output space $[m] \times \mathbb{R}$ simply selects for the highest estimated utility (breaking ties arbitrarily). If we take $\xi$ to be Laplace noise with scale $1/\varepsilon$, then $Q$ is $(\varepsilon, 0)$-DP.

Applying Corollary 3 yields a $((2 + \eta)\varepsilon, 0)$-DP algorithm $A$ with the following utility guarantee. Let $K$ be the number of repetitions and let $(j_1, u(x, j_1) + \xi_1), \dots, (j_K, u(x, j_K) + \xi_K)$ be the outputs from the runs of the base algorithm. The probability that the repeated algorithm $A$ will consider $j_* := \arg \max_{j \in [m]} u(x, j)$ is $\mathbb{P}[j_* \in \{j_1, \dots, j_K\}] = 1 - \mathbb{E}_K\left[\prod_{k \in [K]} \mathbb{P}[j_* \neq j_k]\right] = 1 - f(1 - 1/m)$, where $f(x) = \mathbb{E}[x^K]$ is the probability generating function of the number of repetitions $K$. For each noise sample, we have $\mathbb{P}[|\xi_k| \leq t] \geq 1 - e^{-\varepsilon t}$ for all $k$ and all $t > 0$.
Thus, for all $t > 0$, the probability that all noise samples are smaller than $t$ is $\mathbb{P}[\forall k \in [K]\ |\xi_k| \leq t] = f(1 - e^{-\varepsilon t})$. By a union bound, we have $\mathbb{P}[u(x, A(x)) \geq u(x, j_*) - 2t] \geq f(1 - e^{-\varepsilon t}) - f(1 - 1/m)$ for all $t > 0$. Setting $\eta = 0$ yields $f(x) = \frac{\log(1 - (1 - \gamma)x)}{\log\gamma}$, so $\mathbb{P}[u(x, A(x)) \geq u(x, j_*) - 2t] \geq \frac{1}{\log(1/\gamma)} \log \left( \frac{1 + \frac{1 - \gamma}{\gamma} \cdot \frac{1}{m}}{1 + \frac{1 - \gamma}{\gamma} \cdot e^{-\varepsilon t}} \right)$. Now set $t = \log(m^{10} - 1)/\varepsilon$ and $\gamma = \frac{1}{\exp(\varepsilon t) + 1} = m^{-10}$ so that $\frac{1 - \gamma}{\gamma} e^{-\varepsilon t} = 1$ and $\frac{1 - \gamma}{\gamma} \cdot \frac{1}{m} = m^9 - \frac{1}{m}$. Then $\mathbb{P}[u(x, A(x)) \geq u(x, j_*) - \frac{20}{\varepsilon} \log m] \geq \frac{1}{10 \log m} \log \left( \frac{m^9 + 1 - 1/m}{2} \right) \geq \frac{9}{10} - \frac{1}{10 \log_2 m}$. That is, we can match the result of the exponential mechanism up to (large) constants. In particular, this means that the lower bounds for selection translate to our results, i.e., our results are tight up to constants.

# E EXTENDING OUR RESULTS TO APPROXIMATE DP

Our results are all in the framework of Rényi DP. A natural question is what can be said if the base algorithm instead only satisfies approximate DP, i.e., $(\varepsilon, \delta)$-DP with $\delta > 0$. Liu & Talwar (2019) considered these questions and gave several results. We now briefly show how our results can be extended in a black-box fashion to this setting.

We begin by defining approximate Rényi divergences and approximate Rényi DP:

Definition 18 (Approximate Rényi Divergence). Let $P$ and $Q$ be probability distributions over $\Omega$. Let $\lambda \in [1, \infty]$ and $\delta \in [0, 1]$.
We define

$$
\mathrm{D}_{\lambda}^{\delta}\left(P \| Q\right) = \inf\left\{\mathrm{D}_{\lambda}\left(P^{\prime} \| Q^{\prime}\right): P = (1 - \delta)P^{\prime} + \delta P^{\prime\prime}, Q = (1 - \delta)Q^{\prime} + \delta Q^{\prime\prime}\right\},
$$

where $P = (1 - \delta)P' + \delta P''$ denotes the fact that $P$ can be expressed as a convex combination of two distributions $P'$ and $P''$ with weights $1 - \delta$ and $\delta$ respectively.

Definition 19 (Approximate Rényi Differential Privacy). A randomized algorithm $M: \mathcal{X}^n \to \mathcal{Y}$ is $\delta$-approximately $(\lambda, \varepsilon)$-Rényi differentially private if, for all neighbouring pairs of inputs $x, x' \in \mathcal{X}^n$, $\mathrm{D}_{\lambda}^{\delta}(M(x) \| M(x')) \leq \varepsilon$.

Definition 19 is an extension of the definition of approximate zCDP (Bun & Steinke, 2016). Some remarks about the basic properties of approximate RDP are in order:

- $(\varepsilon, \delta)$-DP is equivalent to $\delta$-approximate $(\infty, \varepsilon)$-RDP.
- $(\varepsilon, \delta)$-DP implies $\delta$-approximate $(\lambda, \frac{1}{2}\varepsilon^2\lambda)$-RDP for all $\lambda \in (1, \infty)$.
- $\delta$-approximate $(\lambda, \varepsilon)$-RDP implies $(\hat{\varepsilon}, \hat{\delta})$-DP for any $\hat{\varepsilon} \geq \varepsilon$, where

$$
\hat{\delta} = \delta + \frac{\exp((\lambda - 1)(\varepsilon - \hat{\varepsilon}))}{\lambda} \cdot \left(1 - \frac{1}{\lambda}\right)^{\lambda - 1}.
$$

- $\delta$-approximate $(\lambda, \varepsilon)$-Rényi differential privacy is closed under postprocessing.
- If $M_1$ is $\delta_1$-approximately $(\lambda, \varepsilon_1)$-Rényi differentially private and $M_2$ is $\delta_2$-approximately $(\lambda, \varepsilon_2)$-Rényi differentially private, then their composition is $(\delta_1 + \delta_2)$-approximately $(\lambda, \varepsilon_1 + \varepsilon_2)$-RDP.
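The conversion from approximate RDP back to approximate DP in the remarks above can be wrapped in a small helper. A sketch (ours), written with the exponent $(\lambda - 1)(\varepsilon - \hat{\varepsilon})$, which is non-positive for $\hat{\varepsilon} \geq \varepsilon$ so that a larger $\hat{\varepsilon}$ buys a smaller $\hat{\delta}$:

```python
import math

def rdp_to_approx_dp(lmbda, eps, delta, eps_hat):
    """Convert delta-approximate (lmbda, eps)-RDP into (eps_hat, delta_hat)-DP via
    delta_hat = delta + exp((lmbda-1)*(eps-eps_hat))/lmbda * (1-1/lmbda)**(lmbda-1),
    which requires lmbda > 1 and eps_hat >= eps."""
    if not (lmbda > 1 and eps_hat >= eps):
        raise ValueError("need lmbda > 1 and eps_hat >= eps")
    tail = math.exp((lmbda - 1) * (eps - eps_hat)) / lmbda \
        * (1 - 1 / lmbda) ** (lmbda - 1)
    return delta + tail

# Increasing eps_hat decreases the resulting delta_hat.
d1 = rdp_to_approx_dp(lmbda=10.0, eps=1.0, delta=1e-6, eps_hat=2.0)
d2 = rdp_to_approx_dp(lmbda=10.0, eps=1.0, delta=1e-6, eps_hat=4.0)
```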
Our results for Rényi DP can be extended to approximate Rényi DP by the following Lemma.

Lemma 20. Assume $\mathcal{V}$ is a totally ordered set. For a distribution $Q$ on $\mathcal{V}$ and a random variable $K$ supported on $\mathbb{N} \cup \{0\}$, define $A_{Q}^{K}$ as follows. First we sample $K$. Then we sample from $Q$ independently $K$ times and output the best of these samples. This output is a sample from $A_{Q}^{K}$.

Let $Q, Q', Q_{1 - \delta_0}, Q_{\delta_0}, Q_{1 - \delta_0}', Q_{\delta_0}'$ be distributions on $\mathcal{V}$ satisfying $Q = (1 - \delta_0)Q_{1 - \delta_0} + \delta_0 Q_{\delta_0}$ and $Q' = (1 - \delta_0)Q_{1 - \delta_0}' + \delta_0 Q_{\delta_0}'$. Let $K$ be a random variable on $\mathbb{N} \cup \{0\}$ and let $f(x) = \mathbb{E}\left[x^K\right]$ be the probability generating function of $K$. Define a random variable $K'$ on $\mathbb{N} \cup \{0\}$ by $\mathbb{P}[K' = k] = \mathbb{P}[K = k] \cdot (1 - \delta_0)^k / f(1 - \delta_0)$.

Then, for all $\lambda \geq 1$, we have

$$
\mathrm{D}_{\lambda}^{\delta}\left(A_{Q}^{K} \| A_{Q^{\prime}}^{K}\right) \leq \mathrm{D}_{\lambda}\left(A_{Q_{1 - \delta_{0}}}^{K^{\prime}} \| A_{Q_{1 - \delta_{0}}^{\prime}}^{K^{\prime}}\right)
$$

where

$$
\delta = 1 - f(1 - \delta_{0}).
$$

How do we use this lemma? We should think of $A_{Q}^{K}$ as representing the algorithm we want to analyze. The base algorithm $Q$ satisfies $\delta_0$-approximate $(\lambda, \varepsilon)$-RDP. The above lemma says it suffices to analyze the algorithm $A_{\tilde{Q}}^{K'}$ where $\tilde{Q}$ satisfies $(\lambda, \varepsilon)$-RDP. We end up with a $\delta$-approximate RDP result, where the final $\delta$ depends on $\delta_0$ and the PGF of $K$.

As an example, we can combine Lemma 20 with Theorem 6 to obtain the following result for the approximate case.

Corollary 21.
Let $Q:\mathcal{X}^n\to \mathcal{Y}$ be a randomized algorithm satisfying $(\varepsilon_0,\delta_0)$-DP. Assume $\mathcal{Y}$ is totally ordered. Let $\mu > 0$.

Define an algorithm $A: \mathcal{X}^n \to \mathcal{Y}$ as follows. Draw $K$ from a Poisson distribution with mean $\mu$. Run $Q(x)$ repeatedly $K$ times. Then $A(x)$ returns the best value from the $K$ runs. (If $K = 0$, $A(x)$ returns some arbitrary output independent from the input $x$.)

For all $\lambda \leq 1 + \frac{1}{e^{\varepsilon_0} - 1}$, the algorithm $A$ satisfies $\delta'$-approximate $(\lambda, \varepsilon')$-RDP where

$$
\varepsilon^{\prime} = \varepsilon_{0} + (e^{\varepsilon_{0}} - 1) \cdot \log \mu,
$$

$$
\delta^{\prime} = 1 - e^{-\mu \cdot \delta_{0}} \leq \mu \cdot \delta_{0}.
$$

Proof of Lemma 20. For a distribution $P$ on $\mathcal{V}$ and an integer $k \geq 0$, let $\max P^k$ denote the distribution on $\mathcal{V}$ obtained by taking $k$ independent samples from $P$ and returning the maximum value per the total ordering on $\mathcal{V}$. (If $k = 0$, this is some arbitrary fixed distribution.)

Using this notation, we can express $A_{Q}^{K}$ as a convex combination:

$$
A_{Q}^{K} = \sum_{k = 0}^{\infty} \mathbb{P}[K = k] \max Q^{k}.
$$

Suppose $P = (1 - \delta)P' + \delta P''$ is a convex combination. We can view sampling from $P$ as a two-step process: first we sample a Bernoulli random variable $B \in \{0,1\}$ with expectation $\delta$; if $B = 0$, we return a sample from $P'$ and, if $B = 1$, we return a sample from $P''$. Thus, if we draw $k$ independent samples from $P$ like this, then with probability $(1 - \delta)^k$ all of these Bernoullis are 0 and we generate $k$ samples from $P'$; otherwise, we generate some mix of samples from $P'$ and $P''$. Hence we can write $\max P^k = (1 - \delta)^k \max (P')^k + (1 - (1 - \delta)^k)P'''$ for some distribution $P'''$.
It follows that we can express

$$
\begin{array}{l} A_{Q}^{K} = \sum_{k = 0}^{\infty} \mathbb{P}[K = k] \max Q^{k} \\ = \sum_{k = 0}^{\infty} \mathbb{P}[K = k] \left((1 - \delta_{0})^{k} \max Q_{1 - \delta_{0}}^{k} + (1 - (1 - \delta_{0})^{k}) P_{k}\right) \\ = f(1 - \delta_{0}) A_{Q_{1 - \delta_{0}}}^{K^{\prime}} + (1 - f(1 - \delta_{0})) P_{*} \\ \end{array}
$$

for some distributions $P_0, P_1, \dots$ and $(1 - f(1 - \delta_0))P_* = \sum_{k=0}^{\infty} \mathbb{P}[K = k](1 - (1 - \delta_0)^k)P_k$.

Similarly, we can express $A_{Q'}^{K} = f(1 - \delta_0)A_{Q_{1 - \delta_0}'}^{K'} + (1 - f(1 - \delta_0))P_*'$ for some distribution $P_*'$.

Using these convex combinations we have, by the definition of approximate Rényi divergence,

$$
\mathrm{D}_{\lambda}^{\delta}\left(A_{Q}^{K} \big\| A_{Q^{\prime}}^{K}\right) \leq \mathrm{D}_{\lambda}\left(A_{Q_{1 - \delta_{0}}}^{K^{\prime}} \big\| A_{Q_{1 - \delta_{0}}^{\prime}}^{K^{\prime}}\right)
$$

as $\delta = 1 - f(1 - \delta_0)$.

Proof of Corollary 21. Fix neighbouring inputs $x, x'$ and let $Q = Q(x)$ and $Q' = Q(x')$ be the corresponding pair of output distributions from the base algorithm. Then, in the notation of Lemma 20, $A(x) = A_{Q}^{K}$ and $A(x') = A_{Q'}^{K}$ for $K \sim \mathrm{Poisson}(\mu)$.
Setting $\delta' = 1 - f(1 - \delta_0) = 1 - e^{-\mu \cdot \delta_0} \leq \mu \cdot \delta_0$, we have + +$$ +\mathrm{D}_{\lambda}^{\delta^{\prime}}\left(A(x) \| A(x^{\prime})\right) = \mathrm{D}_{\lambda}^{\delta^{\prime}}\left(A_{Q}^{K} \| A_{Q^{\prime}}^{K}\right) \leq \mathrm{D}_{\lambda}\left(A_{Q_{1 - \delta_{0}}}^{K^{\prime}} \| A_{Q_{1 - \delta_{0}}^{\prime}}^{K^{\prime}}\right) +$$ + +where $K^{\prime}\sim \mathrm{Poisson}(\mu \cdot (1 - \delta_{0}))$. + +Now we apply Theorem 6 to the algorithm corresponding to the pair of distributions $A_{Q_{1 - \delta_0}}^{K'}$ and $A_{Q_{1 - \delta_0}'}^{K'}$. The base algorithm, corresponding to the pair $Q_{1 - \delta_0}$ and $Q_{1 - \delta_0}'$, satisfies $(\varepsilon_0, 0)$-DP. This yields + +$$ +\mathrm{D}_{\lambda}\left(A_{Q_{1 - \delta_{0}}}^{K^{\prime}} \left\| A_{Q_{1 - \delta_{0}}^{\prime}}^{K^{\prime}} \right.\right) \leq \varepsilon_{0} + \frac{\log\left(\left(1 - \delta_{0}\right) \cdot \mu\right)}{\lambda - 1} +$$ + +if $e^{\varepsilon_0} \leq 1 + 1 / (\lambda - 1)$. Setting $\lambda = 1 + 1 / (e^{\varepsilon_0} - 1)$ and applying monotonicity (Remark 5 and Lemma 10), we obtain the result. + +# E.1 TRUNCATING THE NUMBER OF REPETITIONS + +Another natural question is what happens if we truncate the distribution of the number of repetitions $K$. For example, we may have an upper limit on the acceptable runtime. Unlike the approach of Liu & Talwar (2019), this does not require relaxing to approximate DP. + +Let $K$ be the non-truncated number of repetitions and let $f(x) = \mathbb{E}\left[x^{K}\right]$ be its PGF. Let $m\in \mathbb{N}$. Let $\tilde{K}$ be the truncated number of repetitions; that is, $\mathbb{P}[\tilde{K} = k] = \mathbb{I}[k\leq m]\cdot \mathbb{P}[K = k] / \mathbb{P}[K\leq m]$. Let $\tilde{f}(x) = \mathbb{E}[x^{\tilde{K}}]$ be the corresponding PGF.
+ +We have $f'(x) = \sum_{k=1}^{\infty} k \cdot x^{k-1} \cdot \mathbb{P}[K=k]$ and $\tilde{f}'(x) = \sum_{k=1}^{m} k \cdot x^{k-1} \cdot \frac{\mathbb{P}[K=k]}{\mathbb{P}[K \leq m]}$. Hence, for $x \in [0,1]$, + +$$ +\begin{array}{l} 0 \leq f^{\prime}(x) - \tilde{f}^{\prime}(x) \cdot \mathbb{P}[K \leq m] \\ = \sum_{k = m + 1}^{\infty} k \cdot x^{k - 1} \cdot \mathbb{P}[K = k] \\ \leq x^{m} \cdot \sum_{k = m + 1}^{\infty} k \cdot \mathbb{P}[K = k] \\ = x^{m} \cdot \mathbb{E}[K \cdot \mathbb{I}[K > m]]. \\ \end{array} +$$ + +Now $\tilde{f}^{\prime}(x)\cdot \mathbb{P}[K\leq m] = \sum_{k = 1}^{m}k\cdot x^{k - 1}\cdot \mathbb{P}[K = k]\geq x^{m - 1}\cdot \mathbb{E}[K\cdot \mathbb{I}[K\leq m]]$. Thus + +$$ +1 \leq \frac{f^{\prime}(x)}{\tilde{f}^{\prime}(x) \cdot \mathbb{P}[K \leq m]} \leq 1 + \frac{x^{m} \cdot \mathbb{E}[K \cdot \mathbb{I}[K > m]]}{x^{m - 1} \cdot \mathbb{E}[K \cdot \mathbb{I}[K \leq m]]} = 1 + x \cdot \frac{\mathbb{E}[K \cdot \mathbb{I}[K > m]]}{\mathbb{E}[K] - \mathbb{E}[K \cdot \mathbb{I}[K > m]]}. +$$ + +Now we can bound the quantity of interest in Lemma 7: for all $q, q' \in [0,1]$, we have + +$$ +\begin{array}{l} \tilde{f}^{\prime}(q)^{\lambda} \cdot \tilde{f}^{\prime}(q^{\prime})^{1 - \lambda} \leq \left(\frac{f^{\prime}(q)}{\mathbb{P}[K \leq m]}\right)^{\lambda} \cdot \left(\frac{f^{\prime}(q^{\prime})}{\mathbb{P}[K \leq m] \cdot \left(1 + \frac{\mathbb{E}[K \cdot \mathbb{I}[K > m]]}{\mathbb{E}[K] - \mathbb{E}[K \cdot \mathbb{I}[K > m]]}\right)}\right)^{1 - \lambda} \\ = f^{\prime}(q)^{\lambda} \cdot f^{\prime}(q^{\prime})^{1 - \lambda} \cdot \frac{1}{\mathbb{P}[K \leq m]} \cdot \left(1 + \frac{\mathbb{E}[K \cdot \mathbb{I}[K > m]]}{\mathbb{E}[K] - \mathbb{E}[K \cdot \mathbb{I}[K > m]]}\right)^{\lambda - 1}. \\ \end{array} +$$ + +This gives us a generic result
for truncated distributions: + +Lemma 22 (Generic Bound (cf. Lemma 7) for Truncated Distributions). Fix $\lambda > 1$ and $m \in \mathbb{N}$. Let $K$ be a random variable supported on $\mathbb{N} \cup \{0\}$. Let $f: [0,1] \to \mathbb{R}$ be the probability generating function of $K$, i.e., $f(x) := \sum_{k=0}^{\infty} \mathbb{P}[K = k] \cdot x^{k}$. + +Let $Q$ and $Q'$ be distributions on $\mathcal{Y}$. Assume $\mathcal{Y}$ is totally ordered. Define a distribution $A$ on $\mathcal{Y}$ as follows. First we sample $\tilde{K}$, which is $K$ conditioned on $K \leq m$, i.e., $\mathbb{P}\left[\tilde{K} = k\right] = \mathbb{P}\left[K = k \mid K\leq m\right]$. + +Then we sample from $Q$ independently $\tilde{K}$ times and output the best of these samples. This output is a sample from $A$. We define $A'$ analogously with $Q'$ in place of $Q$. + +Then + +$$ +\mathrm{D}_{\lambda}\left(A\| A^{\prime}\right)\leq \mathrm{D}_{\lambda}\left(Q\| Q^{\prime}\right) + \frac{1}{\lambda - 1}\log \left(f^{\prime}(q)^{\lambda}\cdot f^{\prime}(q^{\prime})^{1 - \lambda}\right) + \frac{\log\left(\frac{1}{1 - \mathbb{P}[K > m]}\right)}{\lambda - 1} +\log \left(1 + \frac{\mathbb{E}\left[K\cdot\mathbb{I}[K > m]\right]}{\mathbb{E}\left[K\right] - \mathbb{E}\left[K\cdot\mathbb{I}[K > m]\right]}\right), +$$ + +where $q$ and $q'$ are probabilities attained by applying the same arbitrary postprocessing to $Q$ and $Q'$ respectively, i.e., there exists a function $g: \mathcal{Y} \to [0,1]$ such that $q = \underset{X \leftarrow Q}{\mathbb{E}}[g(X)]$ and $q' = \underset{X' \leftarrow Q'}{\mathbb{E}}[g(X')]$. + +This will give almost identical bounds to using the non-truncated distribution as long as $\mathbb{P}[K > m] \ll 1$ and $\mathbb{E}[K \cdot \mathbb{I}[K > m]] \ll \mathbb{E}[K]$, both of which hold for large enough $m$.
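As a quick numerical sanity check on this remark, the two extra terms in the bound of Lemma 22 can be evaluated directly. The sketch below is our own illustration (the Poisson choice of $K$, its mean, the Rényi order, and the cutoffs $m$ are arbitrary); it computes $\frac{1}{\lambda-1}\log\frac{1}{\mathbb{P}[K\leq m]}$ and $\log\left(1 + \mathbb{E}[K\cdot\mathbb{I}[K>m]]/(\mathbb{E}[K]-\mathbb{E}[K\cdot\mathbb{I}[K>m]])\right)$ and confirms that their sum becomes negligible once $m$ is a few times the mean.

```python
import math

def poisson_pmf(k, mu):
    # numerically stable Poisson pmf via log-space evaluation
    return math.exp(-mu + k * math.log(mu) - math.lgamma(k + 1))

def truncation_overhead(mu, m, lam, kmax=500):
    """Extra privacy cost (the last two terms of Lemma 22) incurred by
    truncating K ~ Poisson(mu) at m, for Renyi order lam > 1."""
    p_le_m = sum(poisson_pmf(k, mu) for k in range(m + 1))
    tail_mean = sum(k * poisson_pmf(k, mu) for k in range(m + 1, kmax))
    term1 = math.log(1.0 / p_le_m) / (lam - 1)            # log(1/P[K<=m])/(lam-1)
    term2 = math.log(1.0 + tail_mean / (mu - tail_mean))  # E[K] = mu for Poisson
    return term1 + term2

# overhead at cutoffs m = mu, 2*mu, 4*mu: order one at the mean,
# essentially zero once the truncated tail is negligible
overheads = [truncation_overhead(mu=10.0, m=m, lam=2.0) for m in (10, 20, 40)]
```

The overhead decreases monotonically in $m$, matching the remark above.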
+ +13If $\tilde{K} = 0$ , the output can be arbitrary, as long as it is the same for both $A$ and $A^{\prime}$ \ No newline at end of file diff --git a/hyperparametertuningwithrenyidifferentialprivacy/images.zip b/hyperparametertuningwithrenyidifferentialprivacy/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ffc6578ddef2093dc44c6cce505c124a4769ea11 --- /dev/null +++ b/hyperparametertuningwithrenyidifferentialprivacy/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:93c388a6f4e2f3716b5378f9011171b4c29075a3332c819ee223a281fbc405f0 +size 1229105 diff --git a/hyperparametertuningwithrenyidifferentialprivacy/layout.json b/hyperparametertuningwithrenyidifferentialprivacy/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a226148b95b236629ebedc41db174a56b09a2b40 --- /dev/null +++ b/hyperparametertuningwithrenyidifferentialprivacy/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1e4eb9467e502aba863fb5c4f4bfeec8178615b9c3b426e4e6fbd4e7463e56e +size 1554333 diff --git a/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/d1b57a45-bb8c-43ea-8925-fc859584c02b_content_list.json b/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/d1b57a45-bb8c-43ea-8925-fc859584c02b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..115f4028935a74b69de06c2269d5ccce780988c8 --- /dev/null +++ b/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/d1b57a45-bb8c-43ea-8925-fc859584c02b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:379269268da3ccd1be58d1b1f85ca7c8cb748c74b0cf9eb47462e95a426aea65 +size 200295 diff --git a/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/d1b57a45-bb8c-43ea-8925-fc859584c02b_model.json 
b/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/d1b57a45-bb8c-43ea-8925-fc859584c02b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6540eb0560335be37bc26020ae5d7da264a4208c --- /dev/null +++ b/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/d1b57a45-bb8c-43ea-8925-fc859584c02b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d2b3d033da656600916d735ba23ef2942d4e21d8c5e3b5b650d2613a240199a7 +size 235109 diff --git a/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/d1b57a45-bb8c-43ea-8925-fc859584c02b_origin.pdf b/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/d1b57a45-bb8c-43ea-8925-fc859584c02b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e72f38e3bcc46ac590171984c5c19e7289f8e19e --- /dev/null +++ b/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/d1b57a45-bb8c-43ea-8925-fc859584c02b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:354f04b0c3726bbac08132bccc32be90d5094ead3aec5cc44d4d2348f14e613e +size 2040150 diff --git a/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/full.md b/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a12e8e9ee3e9e0787bdc2dcb48f2b38d0468c685 --- /dev/null +++ b/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/full.md @@ -0,0 +1,962 @@ +# ILQR-VAE : CONTROL-BASED LEARNING OF INPUT-DRIVEN DYNAMICS WITH APPLICATIONS TO NEURAL DATA + +# Marine Schimel + +Department of Engineering + +University of Cambridge + +Cambridge, UK + +mmc53@cam.ac.uk + +# Ta-Chu Kao + +Gatsby Computational Neuroscience Unit + +University College London + +London, UK + +c.kao@ucl.ac.uk + +# Christopher T. 
Jensen + +Department of Engineering + +University of Cambridge + +Cambridge, UK + +ktj21@cam.ac.uk + +# Guillaume Hennequin + +Department of Engineering + +University of Cambridge + +Cambridge, UK + +g.hennequin@eng.cam.ac.uk + +# ABSTRACT + +Understanding how neural dynamics give rise to behaviour is one of the most fundamental questions in systems neuroscience. To achieve this, a common approach is to record neural populations in behaving animals, and model these data as emanating from a latent dynamical system whose state trajectories can then be related back to behavioural observations via some form of decoding. As recordings are typically performed in localized circuits that form only a part of the wider implicated network, it is important to simultaneously learn the local dynamics and infer any unobserved external input that might drive them. Here, we introduce iLQR-VAE, a control-based approach to variational inference in nonlinear dynamical systems, capable of learning latent dynamics, initial conditions, and ongoing external inputs. As in recent deep learning approaches, our method is based on an input-driven sequential variational autoencoder (VAE). The main novelty lies in the use of the powerful iterative linear quadratic regulator algorithm (iLQR) in the recognition model. Optimization of the standard evidence lower bound requires differentiating through iLQR solutions, which is made possible by recent advances in differentiable control. Importantly, the recognition model is naturally tied to the generative model, greatly reducing the number of free parameters and ensuring high-quality inference throughout the course of learning. Moreover, iLQR can be used to perform inference flexibly on heterogeneous trials of varying lengths. This makes it possible, for instance, to evaluate the model on a single long trial after training on smaller chunks.
We demonstrate the effectiveness of iLQR-VAE on a range of synthetic systems, with autonomous as well as input-driven dynamics. We further apply it to neural and behavioural recordings in non-human primates performing two different reaching tasks, and show that iLQR-VAE yields high-quality kinematic reconstructions from the neural data. + +# 1 INTRODUCTION + +The mammalian brain is a complex, high-dimensional system, containing billions of neurons whose coordinated dynamics ultimately drive behaviour. Identifying and interpreting these dynamics is the focus of a large body of neuroscience research, which is being facilitated by the advent of new experimental techniques that allow large-scale recordings of neural populations (Jun et al., 2017; Stosiek et al., 2003). A range of methods have been developed for learning dynamics from data (Buesing et al., 2012; Gao et al., 2016; Duncker et al., 2019; Archer et al., 2015; Hernandez et al., 2018; She and Wu, 2020; Kim et al., 2021; Nguyen et al., 2020). These methods all specify a + +generative model in the form of a flexible latent dynamical system driven by process noise, coupled with an appropriate observation model. + +Importantly, neural recordings are typically only made in a small selection of brain regions, leaving many areas unobserved which might provide relevant task-related input to the recorded one(s). Yet, the aforementioned methods perform Bayesian inference of state trajectories directly, and therefore do not support inference of external input (which they effectively treat as process noise and marginalize out). Indeed, simultaneous learning of latent dynamics and inference of unobserved control inputs is a challenging, generally degenerate problem that involves teasing apart momentary variations in the data that can be attributed to the system's internal transition function, and those that need to be explained by forcing inputs. 
This distinction can be achieved by introducing external control in the form of abrupt changes in the latent state transition function, and inferring these switching events (Ghahramani and Hinton, 2000; Linderman et al., 2017). More recently, Pandarinath et al. (2018) introduced LFADS, a sequential variational autoencoder (VAE) that performs inference at the level of external inputs as well as initial latent states. The inferred inputs were shown to be congruent with task-induced perturbations in various reaching tasks in primates (Pandarinath et al., 2018; Keshtkaran and Pandarinath, 2019). Further related work is discussed in Appendix A. + +Here, we introduce iLQR-VAE, a new method for learning input-driven latent dynamics from data. As in LFADS, we use an input-driven sequential VAE to encode observations into a set of initial conditions and external inputs driving an RNN generator. However, while LFADS uses a separate, bidirectional RNN as the encoder, here we substitute the inference network with an optimization-based recognition model that relies on the powerful iterative linear quadratic regulator algorithm (iLQR, Li and Todorov, 2004). iLQR solves an optimization problem that finds a mode of the exact posterior over inputs for the current setting of generative parameters. This ensures that the encoder (mean) remains optimal for every update of the decoder, thus reducing the amortization gap (Cremer et al., 2018). Moreover, having the recognition model be implicitly defined by the generative model stabilizes training, prevents posterior collapse (thus circumventing the need for tricks such as KL warmup), and greatly reduces the number of (hyper-)parameters. + +While iLQR-VAE could find applications in many fields as a general approach to learning stochastic nonlinear dynamical systems, here we focus on neuroscience case studies. 
We first demonstrate in a series of synthetic examples that iLQR-VAE can learn the dynamics of both autonomous and input-driven systems. Next, we show state-of-the-art performance on monkey M1 population recordings during two types of reaching tasks (O'Doherty et al., 2018; Churchland et al., 2010). In particular, we show that hand kinematics can be accurately decoded from inferred latent state trajectories, and that the inferred inputs are consistent with recently proposed theories of motor preparation. + +# 2 METHOD + +iLQR-VAE models a set of temporal observations, such as behavioural and/or neural recordings, through a shared input-driven nonlinear latent dynamical system (Figure S1). The input encapsulates process noise (as in traditional latent dynamics models), initial inputs that set the initial condition of the dynamics, and any meaningful task-related control input. In this section, we describe the architecture of the generative model, and the control-based variational inference strategy used for training the model and making predictions. A graphical summary of the model can be found in Appendix B. + +# 2.1 GENERATIVE MODEL + +We consider the following generative model: + +$$ +\text{latent state} \quad \boldsymbol{z}_{t + 1} = f_{\theta}\left(\boldsymbol{z}_{t}, \boldsymbol{u}_{t}, t\right) \tag{1} +$$ + +$$ +\text{observations} \quad \boldsymbol{o}_{t} | \boldsymbol{z}_{t} \sim p_{\theta}(\boldsymbol{o}_{t} | \boldsymbol{z}_{t}) \tag{2} +$$
We use $\pmb{u}_0$ to set the initial condition $\pmb{z}_1 = f_\theta(\pmb{0}, \pmb{u}_0, 0)$ of the + +network1. This way, the latent state trajectory of the network $\mathbf{z}(\mathbf{u}) = \{\mathbf{z}_1, \dots, \mathbf{z}_T\}$ is entirely determined by the input sequence $\mathbf{u} = \{\mathbf{u}_0, \dots, \mathbf{u}_T\}$ and the state transition function $f_{\theta}(\cdot)$ , according to Equation 1. For $f_{\theta}(\cdot)$ , we use either standard linear or GRU-like RNN dynamics (see Appendix C for details). For the likelihoods, we use Gaussian or Poisson distributions with means given by linear or nonlinear readouts of the network state of the form $\bar{o}_t = h(\mathbf{C}\mathbf{z}_t + \mathbf{b})$ (Appendix D). + +We place a Gaussian prior over $\pmb{u}_{t\leq 0}$ . We then consider two alternative choices for the prior over $\pmb{u}_{t > 0}$ . The first is a Gaussian prior + +$$ +p _ {\theta} \left(\boldsymbol {u} _ {t > 0}\right) = \mathcal {N} \left(0, \boldsymbol {S} ^ {2}\right) \tag {3} +$$ + +with $S = \mathrm{diag}(s_1, \ldots, s_m)$ . In many settings however, we expect inputs to enter the system in a sparse manner. To explicitly model this, we introduce a second prior over $\mathbf{u}$ in the form of a heavy-tailed distribution constructed hierarchically by assuming that the $i^{\mathrm{th}}$ input at time $t > 0$ is + +$$ +u _ {i t} = s _ {i} \epsilon_ {i t} \sqrt {\nu / \alpha_ {t}} \tag {4} +$$ + +where $s_i > 0$ is a scale factor, $\epsilon_{it} \sim \mathcal{N}(0,1)$ is independent across $i$ and $t$ , and $\alpha_t \sim \chi_\nu^2$ is a shared scale factor drawn from a chi-squared distribution with $\nu$ degrees of freedom. Thus, inputs are spatially and temporally independent a priori, such that any spatio-temporal structure in the observations will have to be explained by the coupled dynamics of the latent states. Moreover, the heavy-tailed nature of this prior allows for strong inputs when they are needed. 
Finally, the fact that the scale factor is shared across input dimensions means that inputs are either all weak or potentially all strong at the same time for all input channels, expressing the prior belief that inputs come as shared events. + +This hierarchical construction induces a multivariate Student prior at each time step: + +$$ +p _ {\theta} \left(\boldsymbol {u} _ {t}\right) = \frac {\Gamma [ (\nu + m) / 2 ]}{\Gamma [ \nu / 2 ] (\nu \pi) ^ {m / 2} | \boldsymbol {S} |} \left[ 1 + \frac {1}{\nu} \boldsymbol {u} _ {t} ^ {T} \boldsymbol {S} ^ {- 2} \boldsymbol {u} _ {t} \right] ^ {- (\nu + m) / 2} \tag {5} +$$ + +where $S = \mathrm{diag}(s_1, \ldots, s_m)$ . Note that both $S$ and $\nu$ are parameters of the generative model, which we will learn. + +# 2.2 ILQR-VAE: A NOVEL CONTROL-BASED VARIATIONAL INFERENCE STRATEGY + +To train the model, we optimize $\theta$ to maximize the log-likelihood of observing a collection of independent observation sequences $\mathcal{O} = \{\pmb{o}^{(1)},\dots,\pmb{o}^{(K)}\}$ , or "trials", given by: + +$$ +\log p _ {\theta} (\boldsymbol {\mathcal {O}}) = \sum_ {k = 1} ^ {K} \log \int p _ {\theta} \left(\boldsymbol {o} ^ {(k)} \mid \boldsymbol {z} (\boldsymbol {u})\right) p _ {\theta} (\boldsymbol {u}) d \boldsymbol {u}. \tag {6} +$$ + +As the integral is in general intractable, we resort to a variational inference strategy by introducing a recognition model $q_{\phi}(\boldsymbol{u}| \boldsymbol{o}^{(k)})$ to approximate the posterior $p_{\theta}(\boldsymbol{u}| \boldsymbol{o}^{(k)})$ . 
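For concreteness, the hierarchical prior over inputs (Equations 4 and 5) can be sampled directly; the sketch below is our own illustration (dimensions, scales, and degrees of freedom are arbitrary choices, not values from the paper).

```python
import numpy as np

def sample_prior_inputs(T, m, s, nu, rng):
    """Draw u_{1:T} from the hierarchical heavy-tailed prior:
    u_it = s_i * eps_it * sqrt(nu / alpha_t), with eps_it ~ N(0, 1) and a
    scale alpha_t ~ chi^2_nu shared across all m input channels, so that
    each u_t is marginally multivariate Student-t (Equation 5)."""
    alpha = rng.chisquare(nu, size=T)        # one shared scale per time step
    eps = rng.standard_normal((T, m))        # independent Gaussian factors
    return s[None, :] * eps * np.sqrt(nu / alpha)[:, None]

rng = np.random.default_rng(0)
u = sample_prior_inputs(T=500, m=3, s=np.ones(3), nu=4.0, rng=rng)
```

Because $\alpha_t$ is shared across channels, large-magnitude samples tend to co-occur across the $m$ input dimensions, which is exactly the "shared events" structure described above.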
Following standard practice (Kingma and Welling, 2013; Rezende et al., 2014), we thus train the model by maximizing the evidence lower-bound (ELBO): + +$$ +\begin{array}{rll} \mathcal{L}(\boldsymbol{\mathcal{O}}, \theta, \phi) & = \sum_{k} \mathbb{E}_{q_{\phi}(\boldsymbol{u}|\boldsymbol{o}^{(k)})}\left[\log p_{\theta}(\boldsymbol{o}^{(k)}|\boldsymbol{u}) + \log p_{\theta}(\boldsymbol{u}) - \log q_{\phi}(\boldsymbol{u}|\boldsymbol{o}^{(k)})\right] & (7) \\ & = \sum_{k} \mathbb{E}_{q_{\phi}(\boldsymbol{u}|\boldsymbol{o}^{(k)})}\left[\sum_{t = 1}^{T}\log p_{\theta}\left(\boldsymbol{o}_{t}^{(k)} \mid \boldsymbol{z}_{t}\right) + \log p_{\theta}(\boldsymbol{u}_{t}) - \log q_{\phi}(\boldsymbol{u}_{t}|\boldsymbol{o}^{(k)})\right] & (8) \\ & \leq \log p_{\theta}(\boldsymbol{\mathcal{O}}) & (9) \\ \end{array} +$$ + +with respect to both $\theta$ and $\phi$. + +Here, the main novelty is the use of an optimization-based recognition model. We reason that maximizing the exact log posterior, i.e., computing + +$$ +\begin{array}{rll} \boldsymbol{u}^{\star}\left(\boldsymbol{o}^{(k)}\right) & = \underset{\boldsymbol{u}}{\operatorname{argmax}} \log p_{\theta}(\boldsymbol{u}|\boldsymbol{o}^{(k)}) & (10) \\ & = \underset{\boldsymbol{u}}{\operatorname{argmax}}\left[\sum_{t = 1}^{T}\log p_{\theta}\left(\boldsymbol{o}_{t}^{(k)}|\boldsymbol{u}\right) + \log p_{\theta}\left(\boldsymbol{u}_{t}\right)\right] & (11) \\ \end{array} +$$ + +subject to the generative dynamics of Equations 1 and 2, is a standard nonlinear control problem: $\log p_{\theta}(\pmb{o}_t^{(k)}|\pmb{u})$ acts as a running cost penalizing momentary deviations between desired outputs $\pmb{o}_t$ and the actual outputs caused by a set of controls $\pmb{u}$, and $\log p_{\theta}(\pmb{u}_t)$ acts as an energetic cost on those controls.
Importantly, there exists a general-purpose, efficient algorithm to solve such nonlinear control problems: iLQR (Li and Todorov, 2004; Appendix E). We thus propose to use a black-box iLQR solver to parameterize the mean of the recognition density $q_{\phi}(\pmb{u}|\pmb{o})$ for any $\pmb{o}$, and to model uncertainty separately using a multivariate Gaussian density common to all trials. Therefore, we parametrize the recognition model as follows: + +$$ +q_{\phi}(\boldsymbol{u}|\boldsymbol{o}) = \mathcal{N}(\boldsymbol{u}; \boldsymbol{u}^{\star}(\boldsymbol{o}), \boldsymbol{\Sigma}_{\mathrm{s}} \otimes \boldsymbol{\Sigma}_{\mathrm{t}}) \tag{12} +$$ + +$$ +\text{with} \quad \boldsymbol{u}^{\star}(\boldsymbol{o}) = \mathrm{iLQRsolve}(\boldsymbol{o}, \theta), \tag{13} +$$ + +where we use a separable posterior covariance (the Kronecker product of a spatial factor $\pmb{\Sigma}_{\mathrm{s}}$ and a temporal factor $\pmb{\Sigma}_{\mathrm{t}}$). + +To optimize the ELBO, we estimate the expectation in Equation 8 by drawing samples from $q_{\phi}(\pmb{u}|\pmb{o}^{(k)})$ and using the reparameterization trick (Kingma et al., 2015) to obtain gradients. A major complication that would normally preclude the use of optimization-based recognition models is the need to differentiate through the mean of the posterior. In this case, this involves differentiating through an entire optimization process. Using automatic differentiation within the iLQR solver is in general impractically expensive memory-wise. However, recent advances in differentiable model predictive control enable implicit differentiation through iLQRsolve with a memory cost that does not depend on the number of iterations (Amos et al., 2018; Blondel et al., 2021; Appendix F). + +# 2.3 COMPLEXITY AND IMPLEMENTATION + +We optimize the ELBO using Adam (Kingma and Ba, 2014) with a decaying learning rate $\propto 1 / \sqrt{i}$, where $i$ is the iteration number.
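A convenient consequence of the separable covariance in Equation 12 is that reparameterized samples never require forming the full Kronecker product; a matrix-normal draw using the Cholesky factors of the two small covariance factors suffices. A minimal sketch (the sizes and diagonal covariance factors are illustrative, and `u_star` is a stand-in for the iLQR solution):

```python
import numpy as np

rng = np.random.default_rng(0)
m, T = 3, 100                             # input channels, time steps

u_star = np.zeros((m, T))                 # stand-in for the iLQRsolve output
Sigma_s = 0.1 * np.eye(m)                 # spatial covariance factor
Sigma_t = 0.5 * np.eye(T)                 # temporal covariance factor
L_s = np.linalg.cholesky(Sigma_s)
L_t = np.linalg.cholesky(Sigma_t)

# matrix-normal reparameterization: for Z with iid N(0, 1) entries,
# vec(L_s @ Z @ L_t.T) is Gaussian with a Kronecker-structured covariance
# built from Sigma_s and Sigma_t (ordering depends on the vec convention)
Z = rng.standard_normal((m, T))
u_sample = u_star + L_s @ Z @ L_t.T       # one sample from q_phi(u | o)
```

The same reparameterization gives pathwise gradients with respect to the Cholesky factors, as needed for the ELBO estimator above.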
Averaging over data samples can be easily parallelized; we do this here using the MPI library and a local CPU cluster. In each iteration and for each data sample, obtaining the approximate posterior mean through iLQR is the main computational bottleneck, with a complexity of $\mathcal{O}(T(n^3 + n^2 n_o))$. To help mitigate this cost, we find it useful to re-use the previously inferred control inputs to initialize each iLQRsolve. + +# 3 EXPERIMENTS AND RESULTS + +# 3.1 ILQR-VAE ENABLES FAST LEARNING OF DYNAMICS + +Before demonstrating the method on a number of synthetic and real datasets involving ongoing external inputs, we begin with a simpler example meant to illustrate some of iLQR-VAE's main properties (Figure 1). We generated data from an autonomous (i.e. non-input-driven) linear dynamical system ( $n = 8$ latents, $m = 3$ input channels) seeded with a random initial condition in each of 56 trials. The state $\mathbf{z}_t$ was linearly decoded with added Gaussian noise to produce observation data, which we used to train a model in the same class. + +At the beginning of learning, iLQR-VAE relies on large ongoing inputs that control the generator into producing outputs very similar to the observations in the data (Figure 1, red box, left), resulting in a rapidly decreasing loss. Subsequently, the amount of input required to fit the observations gradually decreases as the system learns the internal dynamics of the ground truth system. Eventually, the inferred control inputs become confined to the first time bin, i.e. they act as initial conditions for the now autonomous dynamics of the generative model. Thus, iLQR-VAE operates in a regime where the output of the generator explains the data well at all times, and learning consists in making the inputs more parsimonious. We note that this regime is facilitated here by our choice of generator dynamics, which we initialised to be very weak (i.e., with a small spectral radius) and therefore easily controllable. + +![](images/607aefd94cb928f9387ae0226837e291f8ee46fb148078e198cf3891ce4c0b53.jpg) +Figure 1: Fast and robust learning in iLQR-VAE. Example run of iLQR-VAE on a synthetic dataset generated by an autonomous linear dynamical system. iLQR-VAE with an adaptive prior over control inputs (red) initially uses large inputs to fit the observations, but gradually pushes those inputs back into initial conditions as it acquires the ground truth autonomous dynamics. In contrast, iLQR-VAE with a rigid input prior imposing autonomous dynamics gets stuck in plateaus and learns considerably more slowly (black; see text for details). For each setting, insets show the three inferred inputs for a given test trial (top; posterior mean $\bar{u}|o$, rescaled by the maximum input in the sequence), and the posterior predictions for the first two corresponding outputs ( $\bar{o}_i|o$; black dots: ground truth; blue: posterior mean with $95\%$ c.i.). We also compare learning curves with LFADS (yellow curve). We used the same generator architecture in all scenarios, and the learning rate was hand-tuned for each example. + +![](images/65ad98d93fc39c6deaa7aa9b8e2aadd0effca0047d8204d7abf889d756715e98.jpg) + +![](images/c57e5164d33ff7f8e98e9c6472cf8f11af5cc23037756cabe9c5b564ff7c7f40.jpg) + +We contrast this with learning in a modified version of iLQR-VAE where we allowed $\pmb{u}_0$ to vary freely (with a Gaussian prior of adjustable variance) but effectively fixed $\pmb{u}_{t > 0}$ to be 0. In other words, we constrained the dynamics of the generator to remain (near-)autonomous throughout learning (Figure 1, grey box, top). Although this incorporates important information about the ground truth generator (which is itself autonomous), counter-intuitively we found that it impairs learning. At the beginning of training, iLQR is unable to find initial conditions that would explain the data well, resulting in a much higher initial loss.
The model then gets stuck in plateaus that are seemingly avoided by the free version of iLQR-VAE (see Figure S2 for independent repeats of this experiment). + +On the same toy dataset, we also compared iLQR-VAE to LFADS (Pandarinath et al., 2018), keeping the generative model in the same model class (see Appendix G for details). We found that LFADS learns in a similar manner to iLQR-VAE (Figure 1), also progressively doing away with inputs. + +# 3.2 ILQR-VAE FOR NONLINEAR SYSTEM IDENTIFICATION + +Next, we illustrate the method on an autonomous nonlinear dynamical system, the chaotic Lorenz attractor (Lorenz, 1963; Appendix H). This is a standard benchmark to evaluate system identification methods on nonlinear dynamics (Nguyen et al., 2020; Hernandez et al., 2018; Champion et al., 2019), and one typically considers the dynamics to be learned if the trained model can recreate the whole attractor structure. + +Here, we show that iLQR-VAE can learn these complex nonlinear dynamics. Before training, the inferred inputs are large throughout the trial, and explain the output observations by forcing the internal state of the generator into appropriate trajectories (Figure 2A, top). At the end of learning, the inputs remain confined to the first time bin, setting the initial condition of the trajectories which are now driven by the stronger, near-autonomous dynamics of the generator. In Figure 2B we show that, conditioned on an initial bout of test data, the model perfectly predicts the rest of the trajectory. Moreover, starting from a random initial condition, the model can recreate the whole attractor structure (Figure 2C). + +![](images/91eab135ab1901192c91fc7b4372475afb99df8c2b79d3dccd675b0b60e01dbf.jpg) +Figure 2: Learning nonlinear autonomous dynamics. (A) Evolution of the negative ELBO during training of an iLQR-VAE model with a nonlinear RNN and a Gaussian likelihood ( $n = 20$ , $m = 5$ , $n_o = 3$ ). 
The inputs inferred by iLQR are originally strong (blue, before learning), but are progressively pushed to initial conditions (red, after learning) as the autonomous dynamics of the Lorenz attractor are approximated with increasing accuracy. (B) Four example test trajectories (columns), conditioning on the noisy data within the first half (green shading), and predicting in the second half. (C) Single long autonomous trajectory after training (setting $u_{t > 0} = 0$ ), starting from a random initial condition and running the dynamics for 10000 steps. The model displays the butterfly topology characteristic of the Lorenz system, and completes multiple cycles without deviating from the attractor, suggesting that the ground-truth dynamics have been learned. + +![](images/3d92834a20c910e8cc460d4cd8781379df8becaa29a104c00602c0dc61536e2f.jpg) + +![](images/ee0acfd02ff9830373e96f7998076aec8bed05eab75abd824a6716416a5cc908.jpg) + +To quantitatively assess how well the dynamics have been learned, we computed the $k$ -step coefficient of determination, $R_{k}^{2}$ , as in Hernandez et al. (2018). This metric evaluates how well the model can predict the true state $k$ steps into the future, starting from any state inferred along a test trajectory (see Appendix H for details). Hernandez et al. reported $R_{30}^{2} \approx 1$ but did not show results for larger $k$ . For iLQR-VAE, $R_{30}^{2} = 0.998$ and the forward interpolation was still very high at 50 time steps, with $R_{50}^{2} = 0.996$ . + +# 3.3 INFERRING SPARSE INPUTS + +To demonstrate iLQR-VAE's ability to infer unobserved inputs and learn the ground truth dynamics of an input-driven system, we generated synthetic data from a system with $n = 3$ , $m = 3$ and $n_{o} = 10$ , which evolves with linear dynamics for $T = 1000$ time steps (see Appendix I for an example with input-driven nonlinear dynamics). The system was driven by sparse inputs, and the output corrupted with Gaussian noise. 
Input events were drawn in each time step from a Bernoulli distribution with mean $p = 0.03$. Whenever an input event occurred, the magnitude of the input in each channel was drawn from a standard multivariate Gaussian distribution. + +We fit both iLQR-VAE and LFADS models to these data, choosing the generator to be within the ground-truth model class for both. iLQR-VAE captured most of the variance in the inputs (Figure 3A; $R^2 = 0.94 \pm 0.02$; 5 random seeds), and recovered the eigenvalue spectrum of the transition matrix almost perfectly (Figure 3B). LFADS, however, performed poorly on this example ($R^2 = 0.05 \pm 0.02$ for input reconstruction; 3 random seeds), as well as in several other similar comparisons on datasets of different sizes and trial numbers (Appendix J). This is unsurprising, as LFADS assumes a dense (auto-regressive) Gaussian prior over the inputs, which is not overridden by the relatively small amount of data used here. Note however that when applied to a set of 56 trials of 100 time steps driven by Gaussian autoregressive inputs, iLQR-VAE still captured the structure in the inputs more accurately than LFADS did ($R^2 = 0.81 \pm 0.01$ vs. $0.29 \pm 0.06$). We hypothesize that this reflects the difficulty of learning a good recognition model from a small amount of data. We evaluate the effect of the choice of prior more extensively in Appendix K. + +![](images/83d6a82cf8ba79ca3cc28ba24e5091006781e2929905ced447368202d375a4e4.jpg) +Figure 3: Inferring sparse inputs to a linear system. (A) Top: example observations (black dots) and inferred posterior mean (blue line). Bottom: true and inferred inputs. iLQR-VAE is able to infer the timing and magnitude of the inputs almost perfectly, despite being trained on only a single time series of 1000 time steps. Note that iLQR-VAE fails to infer one of the smallest inputs, whose effect on the observations is largely masked by observation noise. (B) Comparison of the true (black) and learned (blue) eigenvalue spectra. This shows that iLQR-VAE recovers the ground-truth dynamical system up to a similarity transformation. + +# 3.4 PREDICTING HAND KINEMATICS FROM PRIMATE M1 RECORDINGS + +# 3.4.1 TRIAL-STRUCTURED MAZE TASK + +To highlight the utility of iLQR-VAE for analyzing experimental neuroscience data, we next applied it to recordings of monkey motor (M1) and dorsal premotor (PMd) cortices during a delayed reaching task ('Maze' dataset of Kaufman et al., 2016; DANDI 000128). This dataset contains 108 different reach configurations over nearly 3000 trials, and has recently been proposed as a neuroscience benchmark for neural data analysis methods (Pei et al., 2021). We compared the performance of iLQR-VAE to several other latent variable models evaluated on this dataset in Pei et al. (2021). + +Consistent with previous findings (Pandarinath et al., 2018), iLQR-VAE inferred inputs that were confined to initial conditions, from which smooth single-trial dynamics evolved near-autonomously (Figure 4A). As a first measure of performance, we evaluated the models on "co-smoothing", i.e. the ability to predict the activity of held-out neurons conditioned on a set of held-in neurons (see Appendix L for details). Conditioning on 137 neurons (i.e. using 45 held-out neurons), we obtained a co-smoothing of $0.331 \pm 0.001$ (over 5 random seeds). For comparison, Pei et al. (2021) report 0.187 for GPFA (Yu et al., 2009), 0.225 for SLDS (Linderman et al., 2017), 0.329 for Neural Data Transformers (Ye and Pandarinath, 2021), and 0.346 for AutoLFADS (LFADS with large-scale hyperparameter optimization; Keshtkaran et al., 2021) on the same dataset. + +Next, we assessed how well hand velocity could be decoded from neural activity – another metric of interest to neuroscientists.
We applied ridge regression to predict the monkey's hand velocity (with a $100\mathrm{ms}$ lag) from momentary neuronal firing rates (mean of the posterior predictive distribution) on test data. This reconstruction could be performed with very high accuracy ($R^2 = 0.896 \pm 0.002$ over 5 random seeds), compared to 0.640 for GPFA, 0.775 for SLDS, 0.897 for Neural Data Transformers and 0.907 for AutoLFADS (Pei et al., 2021). These experiments place iLQR-VAE on par with state-of-the-art methods, without any extensive hyperparameter optimization. + +# 3.4.2 CONTINUOUS REACHING TASK + +While a large number of neuroscience studies perform neural and behavioural recordings during trial-structured tasks, much can be learned by analyzing the dynamics of more naturalistic, less constrained behaviours. iLQR-VAE's flexible recognition model is well-suited to the analysis of such less structured tasks, as it can easily be trained and tested on trials of heterogeneous lengths. To illustrate this, we applied iLQR-VAE to a self-paced reaching task during which a monkey had to reach to consecutive targets randomly sampled from a $17 \times 8$ grid on a screen (O'Doherty et al., 2018; Makin et al., 2018). This dataset consists of neural recordings from primary motor cortex (M1) together with continuous behavioural recordings in the form of $x$- and $y$-velocities of the fingertip. + +In this example, we experimented with fitting the spike trains and hand velocities jointly (combining a Poisson likelihood for the 130 neurons and a Gaussian likelihood for the 2 kinematic variables, see Appendix M for further details). We found that this allowed iLQR-VAE to reach a similar kinematic decoding performance as when fitting neural activity alone, but using a smaller network.
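The lagged linear decoding described above (ridge regression from momentary firing rates to hand velocity) can be sketched in NumPy. The function below is our own minimal illustration, not the authors' code; the helper name, the closed-form solver, and the synthetic array shapes are assumptions, and the lag is expressed in time bins (e.g. 4 bins for a $100\mathrm{ms}$ lag at $25\mathrm{ms}$ resolution):

```python
import numpy as np

def ridge_decode(rates, velocity, lag_bins, alpha=1.0):
    """Ridge regression from firing rates at time t to hand velocity at t + lag.

    rates: (T, n_neurons) momentary firing rates; velocity: (T, 2) x/y velocities.
    For brevity the bias column is also (mildly) ridge-penalized.
    """
    X = np.hstack([rates[:-lag_bins], np.ones((len(rates) - lag_bins, 1))])
    Y = velocity[lag_bins:]
    # Closed-form ridge solution: W = (X^T X + alpha I)^{-1} X^T Y
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
    resid = Y - X @ W
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((Y - Y.mean(axis=0)) ** 2)
    return W, r2
```

On synthetic rates whose lagged linear projection generates the velocity, this recovers $R^2$ close to 1; on real data one would cross-validate `alpha`.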
More generally, we reason that a natural approach to making behavioural predictions from neural data using a probabilistic generator is to fit it to both jointly, and then use the posterior predictive distribution over behavioural variables (conditioning on spike trains only) as a nonlinear decoder. + +![](images/c549371bbd56250561a47dcdc4cf645f47ec24c3936381a8e5f280cfe7f37780.jpg) +Figure 4: iLQR-VAE can be used to decode kinematics from neural data, and learns dynamics relying strongly on preparatory inputs. (A) 50 example hand trajectories (top) from the monkey reaching 'Maze' dataset, corresponding single-trial firing rate timecourse (middle; one example neuron), and inferred input (bottom; one example input dimension). (B) Mean (thick) and single-'trial' (thin) firing rates of two example neurons during reaches of various directions (colours), aligned to target onset. Note that three single-'trial' firing rates are shown, for only three of the 8 reach directions for which averages are shown. Interestingly, single-'trial' activities evolve tightly around their trial averages, and resemble the firing rate responses shown in A (middle). (C) Example spike raster (top) and hand kinematics (bottom) for a 4-second-long chunk of test data in the continuous monkey reaching task. $v_{x}$ and $v_{y}$ refer to hand x- and y-velocities respectively. (D) Overall magnitude of the inferred inputs $\| \pmb{u}_t\|$ (blue), average population spiking activity (red), and hand velocity $\| \pmb{y}_t\|$ (black), each z-scored, averaged across movement episodes, and aligned to target onset (top) or movement onset (bottom).
In future work, this could provide more accurate predictions in those motor tasks where linear regression struggles (see e.g. Schroeder et al., 2022). + +For our analyses, we used the first $\sim 22$ minutes of a single recording session (indy_20160426), excluded neurons with overall firing rates below $2\mathrm{Hz}$, and binned data at $25\mathrm{ms}$ resolution. Although it is not a formal requirement of our method, we chunked the data into 336 non-overlapping pseudo-trials of $4\mathrm{s}$ each, in order to enable parallelization of the ELBO computation during training. We trained the model on only a random subset of 168 trials. + +To highlight the flexibility of iLQR as a recognition model, we then evaluated the model by performing inference on the first 9 minutes of the data, as a single long chunk of observations. Note that this is not generally possible in LFADS or any sequential VAE where an encoder RNN has been trained exclusively on trials of the same fixed length. Despite the lack of trial structure, we found that neurons display a stereotyped firing pattern across multiple instances of each reach. This was revealed by binning the angular space into 8 reach directions, temporally segmenting and grouping the inferred firing rates according to the momentary reach direction, and aligning these segments to the time of target onset (Figure 4B). Moreover, hand kinematics could be linearly decoded from the inferred firing rates with high accuracy (Figure 4C; $R^2 = 0.75 \pm 0.01$ over 5 random seeds), on par with AutoLFADS ($R^2 = 0.76$; Keshtkaran et al., 2021), and considerably higher than GPFA and related approaches ($R^2 = 0.6$; Jensen et al., 2021). + +We next wondered whether we could use iLQR-VAE to address an open question in motor neuroscience, namely the extent to which the peri-movement dynamics of the motor cortex rely on external inputs (possibly from other brain areas).
Such inputs could arise during movement preparation, execution, neither, or both. We thus examined the relationship between the inputs inferred by iLQR-VAE and the concurrent evolution of the neuronal firing rates and hand kinematics. Overall, neuronal activity tends to rise rapidly starting $150~\mathrm{ms}$ before movement onset (Figure 4D, red), consistent with the literature (Shenoy et al., 2013; Churchland et al., 2012). Interestingly, we observed that inputs tend to arise much earlier (around the time of target onset), and start decaying well before the mean neural activity has finished rising (Figure 4D, top), about $150~\mathrm{ms}$ before the hand started to move (Figure 4D, bottom). While these results must be interpreted cautiously, as inference was performed using information from the whole duration of the trial (i.e. using iLQR as a smoother), they show that the data is best explained by large inputs prior to movement onset, rather than during movement itself. The timing of these inputs is globally consistent with target-induced visual inputs driving preparatory activity in M1, whose dynamics then evolve in a more autonomous manner to drive subsequent motion. + +# 4 DISCUSSION + +# LIMITATIONS AND FUTURE WORK + +While we have demonstrated that iLQR-VAE performs well on various toy and real datasets, the method has a number of limitations, some of which could be addressed in future work. Firstly, the problem of decoupling ongoing inputs from dynamics is degenerate in general, and there is no guarantee that iLQR-VAE will always successfully identify the ground-truth system. This problem will be exacerbated in the low-data regime, or if there is a large mismatch between our prior over inputs and the true input distribution. While further generalization tests such as extrapolations can be used to assess post hoc how well the dynamics have been learned, the lack of identifiability will often make interpretation of the model parameters difficult.
Secondly, using iLQR as a way of solving maximum a posteriori inference in state space models comes at a high computational cost, and with the risk that iLQR may converge to a local minimum. We note that both these issues could potentially be tackled at once if process noise in the generator was modelled separately from control inputs, as the MAP estimation problem could then be solved using some of the highly efficient algorithms available in the framework of linearly solvable stochastic control (Todorov, 2009; Dvijotham and Todorov, 2013; Kappen, 2005). Finally, for simplicity we modelled posterior input uncertainty using a common covariance across all data samples. This might be limiting, for example when modelling neural populations that exhibit coordinated global firing fluctuations giving rise to data samples with highly variable information content. A better solution, left to future work, would be to amortize the computation of the posterior uncertainty by reusing some of the computations performed in iLQR. + +# CONCLUSION + +The rise of new tools and software now makes it possible to record from thousands of neurons while monitoring behaviour in great detail (Jun et al., 2017; Mathis et al., 2018; Musk et al., 2019). These datasets create unique opportunities for understanding the brain dynamics that underlie neural and behavioural observations. However, identifying complex dynamical systems is a hard nonlinear filtering and learning problem that calls for new computational techniques (Kutschireiter et al., 2020). Here, we exploited the duality between control and inference (Toussaint, 2009; Kappen and Ruiz, 2016; Levine, 2018; Appendix N) to bring efficient algorithms for nonlinear control to bear on learning and inference in nonlinear state space models. + +The method we proposed uses iLQR, a powerful general purpose nonlinear controller, to perform inference over inputs in an RNN-based generative model. 
Using an optimization-based recognition model such as iLQR has two advantages. First, it brings important flexibility at test time, enabling predictions on arbitrary, heterogeneous sequences of observations as well as seamless handling of missing data. Second, owing to parameter sharing between the generative and recognition models, the ELBO gap is reduced (Appendix O), making learning more robust (in particular, to initialization) and reducing the number of hyperparameters to tune. With the advent of automatically differentiable optimizers (Blondel et al., 2021), we therefore hope that optimization-based recognition models will open up new avenues for VAEs. + +# ACKNOWLEDGEMENTS + +We thank Jasmine Stone and Javier Antorán for helpful comments on the manuscript, and Anil Madhavapeddy for providing computing resources at early stages. M.S. was funded by an EPSRC DTP studentship and K.T.J. was funded by a Gates Cambridge scholarship. This work was performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service (http://www.hpc.cam.ac.uk) funded by EPSRC Tier-2 capital grant EP/P020259/1. + +# REFERENCES + +Amos, B., Rodriguez, I. D. J., Sacks, J., Boots, B., and Kolter, J. Z. (2018). Differentiable MPC for end-to-end planning and control. arXiv preprint arXiv:1810.13400. +Archer, E., Park, I. M., Buesing, L., Cunningham, J., and Paninski, L. (2015). Black box variational inference for state space models. arXiv preprint arXiv:1511.07367. +Blondel, M., Berthet, Q., Cuturi, M., Frostig, R., Hoyer, S., Llinares-López, F., Pedregosa, F., and Vert, J.-P. (2021). Efficient and modular implicit differentiation. arXiv preprint arXiv:2105.15183. +Buesing, L., Macke, J. H., and Sahani, M. (2012). Learning stable, regularised latent models of neural population dynamics. Network: Computation in Neural Systems, 23:24-47. +Champion, K., Lusch, B., Kutz, J. N., and Brunton, S. L. (2019).
Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences, 116(45):22445-22451. +Churchland, M. M., Cunningham, J. P., Kaufman, M. T., Foster, J. D., Nuyujukian, P., Ryu, S. I., and Shenoy, K. V. (2012). Neural population dynamics during reaching. Nature, 487:51-56. +Churchland, M. M., Cunningham, J. P., Kaufman, M. T., Ryu, S. I., and Shenoy, K. V. (2010). Cortical preparatory activity: representation of movement or first cog in a dynamical machine? Neuron, 68(3):387-400. +Cremer, C., Li, X., and Duvenaud, D. (2018). Inference suboptimality in variational autoencoders. In International Conference on Machine Learning, pages 1078-1086. PMLR. +Duncker, L., Bohner, G., Boussard, J., and Sahani, M. (2019). Learning interpretable continuous-time models of latent stochastic dynamical systems. In International Conference on Machine Learning, pages 1726-1734. +Dvijotham, K. and Todorov, E. (2013). Linearly solvable optimal control. In Reinforcement learning and approximate dynamic programming for feedback control, volume 17, pages 119-141. Wiley Online Library. +Gao, Y., Archer, E., Paninski, L., and Cunningham, J. P. (2016). Linear dynamical neural population models through nonlinear embeddings. arXiv preprint arXiv:1605.08454. +Ghahramani, Z. and Hinton, G. E. (2000). Variational learning for switching state-space models. Neural comput., 12:831-864. +Hernandez, D., Moretti, A. K., Wei, Z., Saxena, S., Cunningham, J., and Paninski, L. (2018). Nonlinear evolution via spatially-dependent linear dynamics for electrophysiology and calcium data. arXiv preprint arXiv:1811.02459. +Jensen, K. T., Kao, T.-C., Stone, J. T., and Hennequin, G. (2021). Scalable Bayesian GPFA with automatic relevance determination and discrete noise models. Advances in Neural Information Processing Systems, 34. +Jun, J. J., Steinmetz, N. A., Siegle, J. H., Denman, D. J., Bauza, M., Barbarits, B., Lee, A. K., Anastassiou, C. 
A., Andrei, A., Aydin, C., et al. (2017). Fully integrated silicon probes for high-density recording of neural activity. Nature, 551(7679):232-236. +Kappen, H. J. (2005). Linear theory for control of nonlinear stochastic systems. Phys. Rev. Lett., 95:200201. +Kappen, H. J. and Ruiz, H. C. (2016). Adaptive importance sampling for control and inference. Journal of Statistical Physics, 162:1244-1266. + +Kaufman, M. T., Seely, J. S., Sussillo, D., Ryu, S. I., Shenoy, K. V., and Churchland, M. M. (2016). The largest response component in the motor cortex reflects movement timing but not movement type. eNeuro, 3(4). +Keshtkaran, M. R. and Pandarinath, C. (2019). Enabling hyperparameter optimization in sequential autoencoders for spiking neural data. arXiv preprint arXiv:1908.07896. +Keshtkaran, M. R., Sedler, A. R., Chowdhury, R. H., Tandon, R., Basrai, D., Nguyen, S. L., Sohn, H., Jazayeri, M., Miller, L. E., and Pandarinath, C. (2021). A large-scale neural network training framework for generalized estimation of single-trial population dynamics. bioRxiv. +Kim, T. D., Luo, T. Z., Pillow, J. W., and Brody, C. (2021). Inferring latent dynamics underlying neural population activity via neural differential equations. In Meila, M. and Zhang, T., editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 5551-5561. PMLR. +Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. +Kingma, D. P., Salimans, T., and Welling, M. (2015). Variational dropout and the local reparameterization trick. arXiv preprint arXiv:1506.02557. +Kingma, D. P. and Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114. +Kutschireiter, A., Surace, S. C., and Pfister, J.-P. (2020). The hitchhiker's guide to nonlinear filtering. Journal of Mathematical Psychology, 94:102307. +Levine, S. (2018). 
Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv preprint arXiv:1805.00909. +Li, W. and Todorov, E. (2004). Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), pages 222-229. +Linderman, S., Johnson, M., Miller, A., Adams, R., Blei, D., and Paninski, L. (2017). Bayesian learning and inference in recurrent switching linear dynamical systems. In Artificial Intelligence and Statistics, pages 914-922. PMLR. +Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of atmospheric sciences, 20(2):130-141. +Makin, J. G., O'Doherty, J. E., Cardoso, M. M., and Sabes, P. N. (2018). Superior arm-movement decoding from cortex with a new, unsupervised-learning algorithm. Journal of neural engineering, 15(2):026010. +Mathis, A., Mamidanna, P., Cury, K. M., Abe, T., Murthy, V. N., Mathis, M. W., and Bethge, M. (2018). DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nature neuroscience, 21(9):1281-1289. +Musk, E. et al. (2019). An integrated brain-machine interface platform with thousands of channels. Journal of medical Internet research, 21:e16194. +Nguyen, D., Ouala, S., Drumetz, L., and Fablet, R. (2020). Variational deep learning for the identification and reconstruction of chaotic and stochastic dynamical systems from noisy and partial observations. arXiv preprint arXiv:2009.02296. +O'Doherty, J. E., Cardoso, M. M. B., Makin, J. G., and Sabes, P. N. (2018). Nonhuman Primate Reaching with Multichannel Sensorimotor Cortex Electrophysiology: broadband for indy_20160630_01. This research was supported by the Congressionally Directed Medical Research Program (W81XWH-14-1-0510). JEO was supported by fellowship #2978 from the Paralyzed Veterans of America. JGM was supported by a fellowship from the Swartz Foundation. +Pandarinath, C., O'Shea, D. J., Collins, J., Jozefowicz, R., Stavisky, S. D., Kao, J. C., Trautmann, E. M., Kaufman, M. T., Ryu, S. 
I., Hochberg, L. R., et al. (2018). Inferring single-trial neural population dynamics using sequential auto-encoders. Nature methods, 15(10):805-815. + +Pei, F., Ye, J., Zoltowski, D., Wu, A., Chowdhury, R. H., Sohn, H., O'Doherty, J. E., Shenoy, K. V., Kaufman, M. T., Churchland, M., et al. (2021). Neural Latents Benchmark '21: Evaluating latent variable models of neural population activity. arXiv preprint arXiv:2109.04463. +Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In International conference on machine learning, pages 1278-1286. PMLR. +Schroeder, K. E., Perkins, S. M., Wang, Q., and Churchland, M. M. (2022). Cortical control of virtual self-motion using task-specific subspaces. Journal of Neuroscience, 42(2):220-239. +She, Q. and Wu, A. (2020). Neural dynamics discovery via Gaussian process recurrent neural networks. In Uncertainty in Artificial Intelligence, pages 454-464. PMLR. +Shenoy, K. V., Sahani, M., and Churchland, M. M. (2013). Cortical control of arm movements: a dynamical systems perspective. Annu. Rev. Neurosci., 36:337-359. +Stosiek, C., Garaschuk, O., Holthoff, K., and Konnerth, A. (2003). In vivo two-photon calcium imaging of neuronal networks. Proceedings of the National Academy of Sciences, 100(12):7319-7324. +Todorov, E. (2009). Efficient computation of optimal actions. Proc. Natl. Acad. Sci., 106:11478-11483. +Toussaint, M. (2009). Robot trajectory optimization using approximate inference. In Proceedings of the 26th annual international conference on machine learning, pages 1049-1056. +Ye, J. and Pandarinath, C. (2021). Representation learning for neural population activity with neural data transformers. bioRxiv. +Yu, B. M., Cunningham, J. P., Santhanam, G., Ryu, S. I., Shenoy, K. V., and Sahani, M. (2009). Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity.
In Advances in neural information processing systems, pages 1881-1888. + +# Appendix + +# A ADDITIONAL RELATED WORK + +In this section, we first discuss (non-exhaustively) several methods used for identifying dynamical systems from data, before presenting the few approaches we are aware of that explicitly tackle the problem of inferring unobserved control inputs to those systems. + +The problem of identifying the dynamics giving rise to a set of observations is one that spans many fields, from climate modelling to neuroscience, and a variety of methods have therefore been developed to tackle it. Most existing approaches assume non-driven dynamics, as this greatly facilitates system identification. + +One common modelling paradigm is to assume the data arises from a latent linear dynamical system (LDS), whose parameters can be learned using an Expectation-Maximization (EM) approach (Ghahramani and Hinton, 1996). While linear models are typically very efficient as they allow estimates to be computed in closed form, they severely restrict the range of dynamics that can be approximated. Various extensions have been proposed, such as switching linear dynamical systems (Linderman et al., 2017; Ghahramani and Hinton, 2000), which assume that the data can be modelled using several latent dynamical systems with a Hidden Markov Model controlling the transitions between those. Alternatively, Costa et al. (2019) propose adaptive locally linear dynamics and use an iterative procedure to find the most likely switching points. In a similar vein, Hernandez et al. (2018) approximate the dynamics as locally linear; interestingly, the proposed method (VIND) incorporates the generative dynamics in the approximate posterior distribution over latent trajectories given data. This is reminiscent of the approach taken in iLQR-VAE, where the recognition parameters are kept tied to the generative parameters.
+

Another way to keep the problem solvable while allowing for richer dynamics is to approximate those using a linear combination of nonlinear basis functions. This turns the optimization into the simpler problem of learning the weights of the expansion (with the caveat that one needs to choose the set of basis functions). This is the method used in Brunton et al. (2016b), with an additional constraint that the coefficients are sparse in the space of basis functions, yielding a more interpretable model. This was later extended in Champion et al. (2019) to allow for automatic discovery of a set of coordinates in which the dynamics can be approximated as sparse. + +In a similar manner, a popular approach involves modelling the dynamics as linear in the space of observables (which can include linear or nonlinear mappings from the state of the system), as is done in dynamic mode decomposition (Schmid, 2010; Kutz et al., 2016) (see Brunton et al. (2016a) for applications to neural data). This approach is closely related to Koopman operator theory, which finds a set of dynamic modes and uses those to approximate the data as a single linear dynamical system. + +Finally, the dynamics can be modelled using nonlinear neural networks, and the parameters learned using variational methods (see e.g. Nguyen et al., 2020; Hernandez et al., 2018; Koppe et al., 2019). + +Most of the aforementioned models can be extended to incorporate known external inputs coming into the system. This is for instance done in dynamic mode decomposition with control inputs (DMDc; Proctor et al., 2016), which can be generalized into Koopman operators with inputs and control (KIC; Proctor et al., 2018). + +On the other hand, the range of methods modelling dynamics driven by unobserved inputs (which must thus be inferred) is much more limited. Indeed, LFADS (Pandarinath et al., 2018) is the first method we are aware of that explicitly models the set of control inputs driving the system.
As described in the main text, LFADS models the dynamics as a (potentially) input-driven nonlinear dynamical system, and learns both the parameters and the inputs. More recently, Fieseler et al. (2020) proposed an extension of DMDc to handle unsupervised learning of unobserved signals as well as estimation of the dynamics. This was then used to successfully model neural recordings made in C. elegans. Crucially however, the dynamics were modelled as linear, thus restricting the range of dynamics that the learnt system could generate. Morrison et al. (2020) modelled the same data using input-driven nonlinear dynamics, but assumed a limited subset of inputs driving transitions at given time points, and thus only learned the magnitude of those inputs and not their timing. + +Finally, an approach related to the modelling of unobserved inputs (which give rise to changes that cannot be explained by the dynamics alone) is the explicit modelling of events which lead to discontinuities in the dynamics. This is done in Chen et al. (2020) within the framework of neural ordinary differential equations (Chen et al., 2018). To some extent, one can also view switching dynamical models as inferring unobserved inputs giving rise to state transitions, although those "inputs" are restricted to live in a discrete subspace. + +# B GRAPHICAL SUMMARY OF THE MODEL + +![](images/2fd47bd603e0b975e0dca6b6e34fb4f60dabdccb8480ae9953801e5dbae247ee.jpg) +Figure S1: Illustration of the model. iLQR-VAE is trained to model a set of noisy observations (here, spike trains). During each training iteration, iLQR is used to infer the input $\pmb{u}$ (green), given observations and the current parameters of the generator. Intuitively, the optimal inputs are ones that produce latent trajectories in the RNN that are most compatible with the data, without overfitting. Predictions are generated by running the dynamics forward, conditioned on a given input and set of parameters.
+ +# C IMPLEMENTATION OF THE DYNAMICS + +We considered different functional forms for the discrete-time dynamics of the latent state. In the following, $\pmb{z}_t$ and $\pmb{u}_t$ denote the latent state and an external input at time $t$ , respectively. + +# C.1 LINEAR DYNAMICS + +The simplest case considered is that of linear dynamics: + +$$ +\boldsymbol {z} _ {t + 1} = \boldsymbol {A} \boldsymbol {z} _ {t} + \boldsymbol {B} \boldsymbol {u} _ {t} \tag {S1} +$$ + +One issue with linear dynamics is that they may become unstable, such that repeated application of the operator $\mathbf{A}$ will lead to a divergence of $\| \mathbf{z} \|$ and the associated gradients. This can become problematic, especially when modelling long sequences of observations. To circumvent this issue, we used a parametrization of the propagator $\mathbf{A}$ that ensured it remained stable at all times. To find a stable linear parametrization, we considered the Lyapunov stability condition (Bhatia and Szegö, 2002). The discrete time dynamics of Equation S1 are asymptotically stable if and only if $\mathbf{A}$ satisfies + +$$ +\boldsymbol {P} - \boldsymbol {A} \boldsymbol {P} \boldsymbol {A} ^ {T} = \boldsymbol {I} \tag {S2} +$$ + +for some positive definite matrix $\pmb{P}$ with eigenvalues $\geq 1$ . It is easy to verify that the following parameterization of the state matrix $\pmb{A}$ satisfies this criterion: + +$$ +\boldsymbol {A} = \boldsymbol {U} \boldsymbol {D} ^ {1 / 2} \boldsymbol {Q} (\boldsymbol {D} + \boldsymbol {I}) ^ {- 1 / 2} \boldsymbol {U} ^ {T} \tag {S3} +$$ + +with $\mathbf{U}$ and $Q$ arbitrary unitary matrices, and $D$ an arbitrary non-negative diagonal matrix. Conversely, any stable matrix can be reached by this parameterization. Note that the matrix $P$ that satisfies Equation S2 is then given by $\mathbf{P} = \mathbf{U}(\mathbf{D} + \mathbf{I})\mathbf{U}^T$ . 
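As a numerical sanity check (ours, not part of the paper), one can draw random factors and verify that the parameterization of Equation S3 satisfies the Lyapunov condition of Equation S2 with $\mathbf{P} = \mathbf{U}(\mathbf{D} + \mathbf{I})\mathbf{U}^T$, and is therefore stable:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Random orthogonal factors U and Q (via QR), and a non-negative diagonal D
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = rng.uniform(0.0, 10.0, size=n)

# Equation S3: A = U D^{1/2} Q (D + I)^{-1/2} U^T
A = U @ np.diag(np.sqrt(d)) @ Q @ np.diag(1.0 / np.sqrt(d + 1.0)) @ U.T

# Equation S2 holds with P = U (D + I) U^T ...
P = U @ np.diag(d + 1.0) @ U.T
assert np.allclose(P - A @ P @ A.T, np.eye(n))

# ... and a positive-definite Lyapunov solution implies a spectral radius below 1
assert np.max(np.abs(np.linalg.eigvals(A))) < 1.0
```

In a learning setting, one could optimize unconstrained parameters that map differentiably onto the orthogonal $Q$ and the non-negative diagonal $d$; this is a design suggestion, not the paper's stated implementation.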
Finally, as we are also learning the $B$ and $C$ matrices in Equation S1, we can without loss of generality set $\mathbf{U} = \mathbf{I}$. + +# C.2 GRU DYNAMICS + +To fit the monkey reaching data as well as the Lorenz attractor, we chose the dynamical system to be a Minimal Gated Unit (MGU). More specifically, we used the MGU2 variant of the MGU proposed in Heck and Salem (2017): + +$$
\boldsymbol{f}_t = \sigma\left(\boldsymbol{U}_f \boldsymbol{z}_{t-1}\right) \tag{S4}
$$ + +$$
\hat{\boldsymbol{z}}_t = g\left(\boldsymbol{U}_h \left(\boldsymbol{f}_t \odot \boldsymbol{z}_{t-1}\right) + \boldsymbol{W}\boldsymbol{x}_t + \boldsymbol{b}_h\right) \tag{S5}
$$ + +$$
\boldsymbol{z}_t = \left(1 - \boldsymbol{f}_t\right) \odot \boldsymbol{z}_{t-1} + \boldsymbol{f}_t \odot \hat{\boldsymbol{z}}_t \tag{S6}
$$ + +where $\boldsymbol{x}_t = \boldsymbol{B}\boldsymbol{u}_t$ denotes the input entering the dynamical system. Note that the latent state $z$ is often denoted by $h$ in the literature. We found that the MGU2 gave better and more stable performance than the MGU. We hypothesize that this is due to the input entering the system in the update gate only (as opposed to entering it through both forget and update gates), thus making the system more easily controllable. We chose $\sigma(\cdot)$ to be a sigmoid function, and $g(\cdot)$ to be a soft ReLU-like nonlinearity, + +$$
g(x) = \frac{x + \sqrt{x^2 + 4}}{2} - 1. \tag{S7}
$$ + +# D LIKELIHOOD FUNCTIONS + +The likelihood of the observations appears both in the ELBO and in the iLQR cost. Minimization of the latter via iLQR requires computing the momentary Jacobians and Hessians of the likelihood function w.r.t. the internal state $\boldsymbol{z}_t$.
Although these quantities can be obtained generically via automatic differentiation, iLQR is always faster when they are provided directly (Appendix E), which we did here using the analytical expressions given below.

# D.1 GAUSSIAN LIKELIHOOD

For the Gaussian likelihood, we assume observations $\boldsymbol{o}$ are linearly decoded from latents $\boldsymbol{z}$ and corrupted with Gaussian noise, such that $\boldsymbol{o} \sim \mathcal{N}(\boldsymbol{C}\boldsymbol{z} + \boldsymbol{b}, \boldsymbol{\Sigma})$ , with $\boldsymbol{C}$ the readout matrix, $\boldsymbol{b}$ a vector of biases, and $\boldsymbol{\Sigma}$ a diagonal matrix of variances. This yields the following log-likelihood function:

$$
\log P\left(\boldsymbol{o}_{t} \mid \boldsymbol{z}_{t}\right) = -\frac{1}{2}\left[\left(\boldsymbol{C}\boldsymbol{z}_{t} + \boldsymbol{b} - \boldsymbol{o}_{t}\right)^{T} \boldsymbol{\Sigma}^{-1}\left(\boldsymbol{C}\boldsymbol{z}_{t} + \boldsymbol{b} - \boldsymbol{o}_{t}\right) + n_{o}\log(2\pi) + \sum_{i}\log\Sigma_{ii}\right] \tag{S8}
$$

The Jacobian of this expression is:

$$
\frac{\partial \log P\left(\boldsymbol{o}_{t} \mid \boldsymbol{z}_{t}\right)}{\partial \boldsymbol{z}_{t}} = -\boldsymbol{C}^{T}\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{C}\boldsymbol{z}_{t} + \boldsymbol{b} - \boldsymbol{o}_{t}\right) \tag{S9}
$$

Finally, the Hessian is given by:

$$
\frac{\partial^{2}\log P\left(\boldsymbol{o}_{t} \mid \boldsymbol{z}_{t}\right)}{\partial \boldsymbol{z}_{t}\,\partial \boldsymbol{z}_{t}^{T}} = -\boldsymbol{C}^{T}\boldsymbol{\Sigma}^{-1}\boldsymbol{C} \tag{S10}
$$

# D.2 POISSON LIKELIHOOD

To model spike trains, we assume that they are generated by a Poisson process with an underlying positive rate function for neuron $i$ given by:

$$
\mu_{i} = \beta_{i} f\left(\left(\boldsymbol{C}\boldsymbol{z}\right)_{i} + b_{i}\right)\Delta \tag{S11}
$$

where $f: \mathbb{R}^n \to \mathbb{R}_+^n$ is a nonlinear function (chosen to be an exponential
when modelling the monkey recordings, and a soft ReLU-like nonlinearity elsewhere), $\Delta$ denotes the time bin size, and $\beta_i$ is a neuron-specific gain parameter. This yields the following log-likelihood:

$$
\log P\left(\boldsymbol{o}_{t} \mid \boldsymbol{z}_{t}\right) = \sum_{i=1}^{n_{o}}\left(o_{i}\log\mu_{i} - \mu_{i} - \log o_{i}!\right) \tag{S12}
$$

where the sum is performed over neurons. Using the shorthand notations $h(x) = \log f(x)$ and $\boldsymbol{a}_t = \boldsymbol{C}\boldsymbol{z}_t + \boldsymbol{b}$ , the Jacobian and Hessian of this expression are given by:

$$
\frac{\partial \log P\left(\boldsymbol{o}_{t} \mid \boldsymbol{z}_{t}\right)}{\partial \boldsymbol{z}_{t}} = \boldsymbol{C}^{T}\left[\boldsymbol{o}_{t} \odot h^{\prime}(\boldsymbol{a}_{t}) - \Delta\,\boldsymbol{\beta} \odot f^{\prime}(\boldsymbol{a}_{t})\right] \tag{S13}
$$

$$
\frac{\partial^{2}\log P\left(\boldsymbol{o}_{t} \mid \boldsymbol{z}_{t}\right)}{\partial \boldsymbol{z}_{t}\,\partial \boldsymbol{z}_{t}^{T}} = \boldsymbol{C}^{T}\left[\operatorname{diag}\left(\boldsymbol{o}_{t} \odot h^{\prime\prime}\left(\boldsymbol{a}_{t}\right)\right) - \Delta\operatorname{diag}\left(\boldsymbol{\beta} \odot f^{\prime\prime}\left(\boldsymbol{a}_{t}\right)\right)\right]\boldsymbol{C} \tag{S14}
$$

# E ILQR ALGORITHM

Our recognition model makes use of the iterative Linear Quadratic Regulator algorithm (iLQR; Li and Todorov, 2004; Tassa et al., 2014) to find the mean of the posterior distribution $q_{\phi}(\boldsymbol{u}|\boldsymbol{o})$ .
Iterative LQR is used to solve finite-horizon optimal control problems with non-linear dynamics and non-quadratic costs by (i) linearizing the dynamics locally around some initial trajectory, (ii) performing a quadratic approximation to the control cost around that same trajectory, (iii) solving the linear-quadratic problem generated by the local approximation to obtain better control inputs, and (iv) repeating until convergence, each time linearizing around the trajectory induced by the new inputs. Below, we first introduce the linear-quadratic regulator (LQR), and detail the approximation used in iLQR to turn any non-linear non-quadratic problem into one that can be solved with LQR. Moreover, we provide pseudo-code for our implementation of iLQR (see Algorithm 1).

The Linear Quadratic Regulator is concerned with finding the set of controls $\boldsymbol{u} \in \mathbb{R}^m$ that minimize a quadratic cost function $\mathcal{C}(\boldsymbol{u})$ under deterministic linear dynamics, given by:

$$
\mathcal{C}(\boldsymbol{u}) = \sum_{t=0}^{T}\left[\frac{1}{2}\left(\boldsymbol{z}_{t}^{T}\boldsymbol{C}_{t}^{zz}\boldsymbol{z}_{t} + \boldsymbol{u}_{t}^{T}\boldsymbol{C}_{t}^{uu}\boldsymbol{u}_{t} + \boldsymbol{z}_{t}^{T}\boldsymbol{C}_{t}^{zu}\boldsymbol{u}_{t} + \boldsymbol{u}_{t}^{T}\boldsymbol{C}_{t}^{uz}\boldsymbol{z}_{t}\right) + \boldsymbol{z}_{t}^{T}\boldsymbol{c}_{t}^{z} + \boldsymbol{u}_{t}^{T}\boldsymbol{c}_{t}^{u}\right] \tag{S15}
$$

$$
\text{s.t.}\quad \boldsymbol{z}_{t+1} = \boldsymbol{A}_{t}\boldsymbol{z}_{t} + \boldsymbol{B}_{t}\boldsymbol{u}_{t} + \boldsymbol{h}_{t}. \tag{S16}
$$

Here, $\boldsymbol{A}_t \in \mathbb{R}^{n \times n}$ is a (possibly time-dependent) transition matrix, $\boldsymbol{B}_t \in \mathbb{R}^{n \times m}$ represents the input channels at time $t$ , and $\boldsymbol{h}_t$ is a state- and input-independent term.
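For concreteness, the objects in Equations S15 and S16 can be evaluated by a simple forward rollout. The sketch below (an illustrative NumPy fragment with time-invariant matrices; the cross terms $\boldsymbol{C}^{zu}, \boldsymbol{C}^{uz}$ are omitted for brevity, and all names are our own) makes explicit that the cost is a deterministic function of $\boldsymbol{z}_0$ and the input sequence $\boldsymbol{u}$:

```python
import numpy as np

def lqr_cost(z0, u, A, B, h, Czz, Cuu, cz, cu):
    # Roll the linear dynamics of Equation S16 forward and accumulate the
    # quadratic cost of Equation S15 (cross terms C^zu, C^uz omitted here).
    z, cost = z0, 0.0
    for t in range(u.shape[0]):
        cost += 0.5 * (z @ Czz @ z + u[t] @ Cuu @ u[t]) + cz @ z + cu @ u[t]
        z = A @ z + B @ u[t] + h  # the rollout is fully deterministic
    return cost

n, m, T = 3, 2, 50
rng = np.random.default_rng(1)
A, B, h = 0.9 * np.eye(n), rng.standard_normal((n, m)), np.zeros(n)
Czz, Cuu = np.eye(n), 0.1 * np.eye(m)
cz, cu = np.zeros(n), np.zeros(m)

# With zero inputs, the cost reduces to the decaying state's quadratic penalty.
J = lqr_cost(np.ones(n), np.zeros((T, m)), A, B, h, Czz, Cuu, cz, cu)
assert J > 0.0
```

LQR itself then searches over $\boldsymbol{u}$ for the minimizer of this cost, which the dynamic programming recursion below computes exactly.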
Note that $\boldsymbol{z} \in \mathbb{R}^n$ is a deterministic function of the initial condition $\boldsymbol{z}_0$ and the sequence of inputs $\boldsymbol{u}$ . LQR finds the inputs minimizing Equation S15 using a dynamic programming approach, by recursively finding the feedback rule $(\boldsymbol{K}_t, \boldsymbol{k}_t)$ which gives the optimal inputs minimizing the cost-to-go at each time $t$ as $\boldsymbol{u}_t = \boldsymbol{K}_t\boldsymbol{z}_t + \boldsymbol{k}_t$ . Details can be found in function Backward in Algorithm 1.

iLQR is an extension of LQR to general dynamics and cost functions. Specifically, iLQR minimizes

$$
\mathcal{C}_{\theta}(\boldsymbol{u}) = \sum_{t=0}^{T-1} r_{\theta}\left(\boldsymbol{z}_{t}, \boldsymbol{u}_{t}, t\right) \quad \text{subject to} \quad \boldsymbol{z}_{t+1} = \boldsymbol{f}_{\theta}\left(\boldsymbol{z}_{t}, \boldsymbol{u}_{t}, t\right) \tag{S17}
$$

where $\theta$ denotes a set of parameters. At iteration $i$ , iLQR approximates both the dynamics and the cost around the current trajectory $\boldsymbol{\tau}^i = (\boldsymbol{z}^i, \boldsymbol{u}^i)$ as:

$$
\tilde{\boldsymbol{f}}_{\boldsymbol{\theta}}^{i}\left(\delta\boldsymbol{z}_{t}, \delta\boldsymbol{u}_{t}, t\right) \approx \boldsymbol{f}_{\boldsymbol{\theta}}\left(\boldsymbol{\tau}^{i}\right) + \left(\nabla_{\boldsymbol{z}}\boldsymbol{f}_{\boldsymbol{\theta}}\right)^{T}\delta\boldsymbol{z}_{t} + \left(\nabla_{\boldsymbol{u}}\boldsymbol{f}_{\boldsymbol{\theta}}\right)^{T}\delta\boldsymbol{u}_{t} \tag{S18}
$$

and

$$
\tilde{r}_{\theta}^{i}\left(\delta\boldsymbol{z}_{t}, \delta\boldsymbol{u}_{t}, t\right) \approx r_{\theta}\left(\boldsymbol{\tau}_{t}^{i}\right) + \frac{1}{2}\left[\delta\boldsymbol{z}_{t}^{T}\left(\nabla_{zz}^{2}r_{\theta}\right)\delta\boldsymbol{z}_{t} + 2\,\delta\boldsymbol{z}_{t}^{T}\left(\nabla_{zu}^{2}r_{\theta}\right)\delta\boldsymbol{u}_{t} + \delta\boldsymbol{u}_{t}^{T}\left(\nabla_{uu}^{2}r_{\theta}\right)\delta\boldsymbol{u}_{t}\right] + \delta\boldsymbol{z}_{t}^{T}\left(\nabla_{z}r_{\theta}\right) + \delta\boldsymbol{u}_{t}^{T}\left(\nabla_{u}r_{\theta}\right) \tag{S19}
$$

Here, $\delta\boldsymbol{z}$ and $\delta\boldsymbol{u}$ refer to perturbations around the current nominal trajectory, and all $\nabla$ operators correspond to partial differentiation evaluated at the current nominal trajectory $(\boldsymbol{u}^i,\boldsymbol{z}^i)$ and corresponding time $t$ .

The above equations are readily identified as a local LQR problem of the form of Equation S15, which can thus be solved using standard dynamic programming tools. Once the $\delta\boldsymbol{u}^{\star}$ minimizing Equation S19 has been computed, the inputs are updated as $\boldsymbol{u}^{i+1} = \boldsymbol{u}^i + \delta\boldsymbol{u}^\star$ , and the new state trajectory follows from simulating the dynamics forward with these new inputs. After each LQR update, we thus obtain a new trajectory $\boldsymbol{\tau}^{i+1}$ , and the process repeats until convergence to some locally optimal trajectory $\boldsymbol{\tau}^{\star}$ .

Implementation details can be found in Algorithm 1. Note that the backward LQR pass involves inversion of the matrix $Q_{uu}$ (defined in function Backward of Algorithm 1). Depending on the specific form of the iLQR cost function, this might not always be positive-definite. Therefore, we include an adaptive Levenberg-Marquardt-type regularizer (not described in the pseudo-code), $Q_{uu} \gets Q_{uu} + \lambda I$ , to maintain positive definiteness. Thus, iLQR effectively reverts to first-order gradient descent, as opposed to second-order optimization, whenever the locally quadratic approximation is a poor one.

# F DIFFERENTIATING THROUGH ILQR

Here we discuss how to efficiently differentiate through the iLQR algorithm.
This becomes necessary when one wishes to differentiate through a function involving an iLQR solve, such as the posterior mean of our recognition model (Equation 11). While a naive but simple strategy to achieve this would be to unroll the algorithm and gather gradients at every step, this is expensive both computationally and memory-wise. Amos et al. (2018) derived a way to analytically obtain gradients with respect to the parameters of iLQR, at the cost of a single LQR pass. Specifically, differentiating through an iLQR solve is achieved by running iLQR to convergence, forming a linear-quadratic approximation around the converged trajectory following the steps described in Appendix E, and differentiating through the corresponding LQR problem. Below, we provide an alternative to Amos et al.'s derivation of the gradients of an LQR solution.

# F.1 LQR OPTIMALITY CONDITIONS

We now introduce the more compact notation $\pmb{\tau}_{t} = \left[ \begin{array}{l}\pmb{z}_{t}\\ \pmb{u}_{t} \end{array} \right],\pmb{F}_{t} = \left[ \begin{array}{ll}\pmb{A}_{t} & \pmb{B}_{t} \end{array} \right],\pmb{C}_{t} = \left[ \begin{array}{ll}\pmb{C}_{t}^{zz} & \pmb{C}_{t}^{zu}\\ \pmb{C}_{t}^{uz} & \pmb{C}_{t}^{uu} \end{array} \right]$ , which will be used in the rest of this section.

As described in Appendix E, the finite-horizon, discrete-time LQR problem involves minimizing:

$$
\mathcal{J} = \sum_{t=0}^{T}\left(\frac{1}{2}\boldsymbol{\tau}_{t}^{T}\boldsymbol{C}_{t}\boldsymbol{\tau}_{t} + \boldsymbol{c}_{t}^{T}\boldsymbol{\tau}_{t}\right) \tag{S21}
$$

subject to constraints on its dynamics

$$
\boldsymbol{\tau}_{t+1} = \boldsymbol{F}_{t}\boldsymbol{\tau}_{t} + \boldsymbol{f}_{t}, \tag{S22}
$$

following the notation from Appendix E.
To solve this problem, we write down the Lagrangian:

$$
\mathcal{L} = \sum_{t=0}^{T}\left(\frac{1}{2}\boldsymbol{\tau}_{t}^{T}\boldsymbol{C}_{t}\boldsymbol{\tau}_{t} + \boldsymbol{c}_{t}^{T}\boldsymbol{\tau}_{t}\right) + \sum_{t=0}^{T-1}\boldsymbol{\lambda}_{t+1}^{T}\left(\boldsymbol{F}_{t}\boldsymbol{\tau}_{t} + \boldsymbol{f}_{t} - \left[\begin{array}{ll}\boldsymbol{I} & \boldsymbol{0}\end{array}\right]\boldsymbol{\tau}_{t+1}\right), \tag{S23}
$$

where $\boldsymbol{\lambda}_1, \boldsymbol{\lambda}_2, \dots, \boldsymbol{\lambda}_T$ are adjoint (dual) variables that enforce the dynamics constraint. Differentiating with respect to $\boldsymbol{\lambda}_t$ and $\boldsymbol{\tau}_t$ yields the set of equations satisfied by $\boldsymbol{\lambda}$ and $\boldsymbol{\tau}$ , also known as the KKT conditions (Kuhn and Tucker, 2014; Karush, 2014; Boyd et al., 2004):

$$
\boldsymbol{C}_{t}\boldsymbol{\tau}_{t} + \boldsymbol{c}_{t} + \boldsymbol{F}_{t}^{T}\boldsymbol{\lambda}_{t+1} - \left[\begin{array}{ll}\boldsymbol{I} & \boldsymbol{0}\end{array}\right]^{T}\boldsymbol{\lambda}_{t} = \boldsymbol{0} \tag{S24}
$$

$$
\boldsymbol{F}_{t}\boldsymbol{\tau}_{t} + \boldsymbol{f}_{t} - \left[\begin{array}{ll}\boldsymbol{I} & \boldsymbol{0}\end{array}\right]\boldsymbol{\tau}_{t+1} = \boldsymbol{0} \tag{S25}
$$

$$
\boldsymbol{C}_{T}\boldsymbol{\tau}_{T} + \boldsymbol{c}_{T} - \left[\begin{array}{ll}\boldsymbol{I} & \boldsymbol{0}\end{array}\right]^{T}\boldsymbol{\lambda}_{T} = \boldsymbol{0} \tag{S26}
$$

$$
\boldsymbol{z}_{0} - \left[\begin{array}{ll}\boldsymbol{I} & \boldsymbol{0}\end{array}\right]\boldsymbol{\tau}_{0} = \boldsymbol{0} \tag{S27}
$$

Algorithm 1 iLQRsolve $(\mathcal{C}_{\theta}(\boldsymbol{u}), \boldsymbol{u}^{\mathrm{init}})$ , with $\boldsymbol{u} \in \mathbb{R}^{m}$ and $\mathcal{C}_\theta$ defined in Equation S17.
Parameters: $\theta$ , $\gamma$

**iLQR:**
- $\boldsymbol{\tau}^{0} = \mathrm{Rollout}(\boldsymbol{u}^{\mathrm{init}})$
- **for** $i = 1$ **until converged do**
  - **for** $t = 0$ **to** $T$ **do**
    - $\boldsymbol{F}_{t}^{z} = \nabla_{z}\boldsymbol{f}_{\theta}$ , $\boldsymbol{F}_{t}^{u} = \nabla_{u}\boldsymbol{f}_{\theta}$
    - $\boldsymbol{c}_{t}^{z} = \nabla_{z}r_{\theta}$ , $\boldsymbol{c}_{t}^{u} = \nabla_{u}r_{\theta}$ , $\boldsymbol{C}_{t}^{zz} = \nabla_{z}^{2}r_{\theta}$ , $\boldsymbol{C}_{t}^{uu} = \nabla_{u}^{2}r_{\theta}$ , $\boldsymbol{C}_{t}^{uz} = \nabla_{uz}^{2}r_{\theta}$
  - **end for**
  - $\boldsymbol{k}_{[0:T-1]}, \boldsymbol{K}_{[0:T-1]} = \mathrm{Backward}\left(\boldsymbol{F}_{[0:T]}^{z}, \boldsymbol{F}_{[0:T]}^{u}, \boldsymbol{c}_{[0:T]}^{z}, \boldsymbol{c}_{[0:T]}^{u}, \boldsymbol{C}_{[0:T]}^{zz}, \boldsymbol{C}_{[0:T]}^{uz}, \boldsymbol{C}_{[0:T]}^{uu}\right)$
  - $\boldsymbol{\tau}^{i} = \mathrm{Forward}\left(\boldsymbol{K}_{[0:T-1]}, \boldsymbol{k}_{[0:T-1]}, \boldsymbol{\tau}^{i-1}\right)$
- **end for**

**function** Rollout $(\boldsymbol{u})$ :
- $\boldsymbol{z}_{0} = \boldsymbol{0}$
- **for** $t = 1$ **to** $T$ **do**
  - $\boldsymbol{z}_{t} = \boldsymbol{f}_{\theta}\left(\boldsymbol{z}_{t-1}, \boldsymbol{u}_{t-1}\right)$
- **end for**
- **return** $\boldsymbol{\tau} = \{\boldsymbol{z}, \boldsymbol{u}\}$

**function** Backward $(\boldsymbol{F}_{[0:T]}^{z}, \boldsymbol{F}_{[0:T]}^{u}, \boldsymbol{c}_{[0:T]}^{z}, \boldsymbol{c}_{[0:T]}^{u}, \boldsymbol{C}_{[0:T]}^{zz}, \boldsymbol{C}_{[0:T]}^{uz}, \boldsymbol{C}_{[0:T]}^{uu})$ :
- $\boldsymbol{v}_{T} = \boldsymbol{c}_{T}^{z}$ , $\boldsymbol{V}_{T} = \boldsymbol{C}_{T}^{zz}$
- **for** $t = T-1$ **to** $0$ **do**
  - $\boldsymbol{Q}_{t}^{zz} = \boldsymbol{C}_{t}^{zz} + \boldsymbol{F}_{t}^{z\top}\boldsymbol{V}_{t+1}\boldsymbol{F}_{t}^{z}$
  - $\boldsymbol{Q}_{t}^{uz} = \boldsymbol{C}_{t}^{uz} + \boldsymbol{F}_{t}^{u\top}\boldsymbol{V}_{t+1}\boldsymbol{F}_{t}^{z}$
  - $\boldsymbol{Q}_{t}^{uu} = \boldsymbol{C}_{t}^{uu} + \boldsymbol{F}_{t}^{u\top}\boldsymbol{V}_{t+1}\boldsymbol{F}_{t}^{u}$
  - $\boldsymbol{q}_{t}^{z} = \boldsymbol{c}_{t}^{z} + \boldsymbol{F}_{t}^{z\top}\boldsymbol{v}_{t+1}$
  - $\boldsymbol{q}_{t}^{u} = \boldsymbol{c}_{t}^{u} + \boldsymbol{F}_{t}^{u\top}\boldsymbol{v}_{t+1}$
  - $\boldsymbol{K}_{t} = -\left(\boldsymbol{Q}_{t}^{uu}\right)^{-1}\boldsymbol{Q}_{t}^{uz}$
  - $\boldsymbol{k}_{t} = -\left(\boldsymbol{Q}_{t}^{uu}\right)^{-1}\boldsymbol{q}_{t}^{u}$
  - $\boldsymbol{V}_{t} = \boldsymbol{Q}_{t}^{zz} + \boldsymbol{Q}_{t}^{zu}\boldsymbol{K}_{t}$
  - $\boldsymbol{v}_{t} = \boldsymbol{q}_{t}^{z} + \boldsymbol{K}_{t}^{T}\boldsymbol{q}_{t}^{u}$
- **end for**
- **return** $\boldsymbol{k}_{[0:T-1]}, \boldsymbol{K}_{[0:T-1]}$

**function** Forward $(\boldsymbol{K}_{[0:T-1]}, \boldsymbol{k}_{[0:T-1]}, \boldsymbol{\tau} = \{\boldsymbol{u}, \boldsymbol{z}\})$ :
- $\alpha = 1$
- **repeat**
  - $\hat{\boldsymbol{z}}_{0} = \boldsymbol{0}$
  - $\hat{\boldsymbol{u}}_{0} = \alpha\boldsymbol{k}_{0}$
  - **for** $t = 1$ **to** $T$ **do**
    - $\hat{\boldsymbol{z}}_{t} = \boldsymbol{f}_{\theta}\left(\hat{\boldsymbol{z}}_{t-1}, \hat{\boldsymbol{u}}_{t-1}\right)$
    - $\hat{\boldsymbol{u}}_{t} = \boldsymbol{K}_{t}\left(\hat{\boldsymbol{z}}_{t} - \boldsymbol{z}_{t}\right) + \alpha\boldsymbol{k}_{t}$
  - **end for**
  - $\alpha = \gamma\alpha$
- **until** $\mathcal{C}_{\theta}(\hat{\boldsymbol{u}}) < \mathcal{C}_{\theta}(\boldsymbol{u})$
- **return** $\boldsymbol{\tau} = \{\hat{\boldsymbol{z}}, \hat{\boldsymbol{u}}\}$

Rearranging, we can rewrite the KKT conditions in matrix form as:

$$
+\underbrace {\left( \begin{array}{c c c c c} \ddots & & & & \\ & \boldsymbol {C} _ {t} & \boldsymbol {F} _ {t} ^ {T} & & \\ & \boldsymbol {F} _ {t} & \boldsymbol {0} & [ - \boldsymbol {I} \quad \boldsymbol {0} ] & \\ & & [ - \boldsymbol {I} \quad \boldsymbol {0} ] ^ {T} & \boldsymbol {C} _ {t + 1} & \boldsymbol {F} _ {t + 1} ^ {T} \\ & & & \boldsymbol {F} _ {t + 1} & \boldsymbol {0} \\ & & & & \ddots \end{array} \right)} _ {K} \underbrace {\left( \begin{array}{c} \vdots \\ \boldsymbol {\tau} _ {t} \\ \boldsymbol {\lambda} _ {t + 1} \\ \boldsymbol {\tau} _ {t + 1} \\ \boldsymbol {\lambda} _ {t + 2} \\ \vdots \end{array} \right)} _ {p} = - \underbrace {\left( \begin{array}{c} \vdots \\ \boldsymbol {c} _ {t} \\ \boldsymbol {f} _ {t} \\ \boldsymbol {c} _ {t + 1} \\ \boldsymbol {f} _ {t + 1} \\ \vdots \end{array} \right)} _ {q} \tag {S28} +$$ + +These optimality conditions are satisfied for the solution to the optimization problem $p^{\star} = (\tau_0^{\star}, \dots, \tau_T^{\star}, \lambda_1^{\star}, \dots, \lambda_T^{\star})$ . Equation S28 implies that the solution of the LQR problem $p^{\star}$ will satisfy: + +$$ +\boldsymbol {p} ^ {\star} = - \boldsymbol {K} ^ {- 1} \boldsymbol {q}. \tag {S29} +$$ + +Computing this quickly becomes infeasible as $\pmb{K}$ grows with long-time horizons, and Equation S28 is typically solved in linear time using a dynamic programming approach, as described in Appendix E. + +# F.2 BACKPROPAGATING THROUGH THE LQR SOLVER + +Differentiating through an LQR solve boils down to differentiating through the backsolve in Equation S29. In the following, we denote the adjoint of parameter $\theta$ as $\bar{\theta}$ . From Giles (2008), we know that the adjoint of the backsolve operation is given by: + +$$ +\bar {\boldsymbol {q}} = - \boldsymbol {K} ^ {- T} \bar {\boldsymbol {p}}, \tag {S30} +$$ + +$$ +\bar {\boldsymbol {K}} = - \boldsymbol {K} ^ {- T} \bar {\boldsymbol {p}} \boldsymbol {p} ^ {T} = \bar {\boldsymbol {q}} \boldsymbol {p} ^ {T}. 
\tag{S31}
$$

We note that Equation S30 has the same form as Equation S29, which means we can compute $\bar{\boldsymbol{q}} = (\dots, \bar{\boldsymbol{c}}_t, \bar{\boldsymbol{f}}_t, \dots)^T$ by solving another LQR problem. After solving for $\bar{\boldsymbol{q}}$ , we can then compute $\bar{\boldsymbol{K}}$ as the outer product of $\bar{\boldsymbol{q}}$ with $\boldsymbol{p}^{\star}$ to get:

$$
\underbrace{\left(\begin{array}{ccccc} \ddots & & & & \\ & \bar{\boldsymbol{K}}_{\boldsymbol{C}_{t}} & \bar{\boldsymbol{K}}_{\boldsymbol{F}_{t}^{T}} & & \\ & \bar{\boldsymbol{K}}_{\boldsymbol{F}_{t}} & & & \\ & & & \bar{\boldsymbol{K}}_{\boldsymbol{C}_{t+1}} & \bar{\boldsymbol{K}}_{\boldsymbol{F}_{t+1}^{T}} \\ & & & \bar{\boldsymbol{K}}_{\boldsymbol{F}_{t+1}} & \\ & & & & \ddots \end{array}\right)}_{\bar{\boldsymbol{K}}} \tag{S32}
$$

$$
= \left(\begin{array}{c} \vdots \\ \bar{\boldsymbol{c}}_{t} \\ \bar{\boldsymbol{f}}_{t} \\ \bar{\boldsymbol{c}}_{t+1} \\ \bar{\boldsymbol{f}}_{t+1} \\ \vdots \end{array}\right)\left(\dots \quad \boldsymbol{\tau}_{t}^{\star T} \quad \boldsymbol{\lambda}_{t+1}^{\star T} \quad \boldsymbol{\tau}_{t+1}^{\star T} \quad \boldsymbol{\lambda}_{t+2}^{\star T} \quad \dots\right) \tag{S33}
$$

$$
= \left(\begin{array}{ccccc} \ddots & & & & \\ & \bar{\boldsymbol{c}}_{t}\boldsymbol{\tau}_{t}^{\star T} & \bar{\boldsymbol{c}}_{t}\boldsymbol{\lambda}_{t+1}^{\star T} & & \\ & \bar{\boldsymbol{f}}_{t}\boldsymbol{\tau}_{t}^{\star T} & & & \\ & & & \bar{\boldsymbol{c}}_{t+1}\boldsymbol{\tau}_{t+1}^{\star T} & \bar{\boldsymbol{c}}_{t+1}\boldsymbol{\lambda}_{t+2}^{\star T} \\ & & & \bar{\boldsymbol{f}}_{t+1}\boldsymbol{\tau}_{t+1}^{\star T} & \\ & & & & \ddots \end{array}\right). \tag{S34}
$$

Collecting all the gradients of $\bar{\boldsymbol{C}}_t$ and $\bar{\boldsymbol{F}}_t$ , we arrive at

$$
\bar{\boldsymbol{C}}_{t} = \frac{1}{2}\left(\bar{\boldsymbol{K}}_{\boldsymbol{C}_{t}} + \bar{\boldsymbol{K}}_{\boldsymbol{C}_{t}}^{T}\right) = \frac{1}{2}\left(\bar{\boldsymbol{c}}_{t}\boldsymbol{\tau}_{t}^{\star T} + \boldsymbol{\tau}_{t}^{\star}\bar{\boldsymbol{c}}_{t}^{T}\right) \tag{S35}
$$

$$
\bar{\boldsymbol{F}}_{t} = \bar{\boldsymbol{K}}_{\boldsymbol{F}_{t}} + \bar{\boldsymbol{K}}_{\boldsymbol{F}_{t}^{T}}^{T} = \bar{\boldsymbol{f}}_{t}\boldsymbol{\tau}_{t}^{\star T} + \boldsymbol{\lambda}_{t+1}^{\star}\bar{\boldsymbol{c}}_{t}^{T}. \tag{S36}
$$

Note that we have symmetrized the adjoint of $\boldsymbol{C}_t$ , which ensures that $\boldsymbol{C}_t$ remains symmetric after each gradient update. The antisymmetric part of $\boldsymbol{C}_t$ does not contribute to the LQR cost.

Finally, one subtlety arises from the fact that Equation S21 and Equation S22 are written as functions of $\boldsymbol{\tau}$ in the general LQR setting. In the iLQR case however, the LQR problem is local at each iteration, and $\delta\boldsymbol{\tau}$ vanishes at convergence.
If we denote by $i$ the last iteration before declaring convergence, one can however write the problem as a function of the variable of interest $\boldsymbol{\tau}^{\star}$ , using:

$$
\mathcal{J} = \sum_{t=0}^{T}\left(\frac{1}{2}\left(\boldsymbol{\tau}_{t}^{\star} - \boldsymbol{\tau}_{t}^{i}\right)^{T}\boldsymbol{C}_{t}\left(\boldsymbol{\tau}_{t}^{\star} - \boldsymbol{\tau}_{t}^{i}\right) + \boldsymbol{c}_{t}^{T}\left(\boldsymbol{\tau}_{t}^{\star} - \boldsymbol{\tau}_{t}^{i}\right)\right) \tag{S37}
$$

$$
= \sum_{t=0}^{T}\left(\frac{1}{2}\boldsymbol{\tau}_{t}^{\star T}\boldsymbol{C}_{t}\boldsymbol{\tau}_{t}^{\star} + \left(\boldsymbol{c}_{t}^{T} - \boldsymbol{\tau}_{t}^{iT}\boldsymbol{C}_{t}\right)\boldsymbol{\tau}_{t}^{\star}\right) + \mathrm{cst} \tag{S38}
$$

subject to constraints on its dynamics

$$
\boldsymbol{\tau}_{t+1}^{\star} = \boldsymbol{F}_{t}\boldsymbol{\tau}_{t}^{\star} + \boldsymbol{f}_{t} - \boldsymbol{F}_{t}\boldsymbol{\tau}_{t}^{i}. \tag{S39}
$$

This implies that the values for $\boldsymbol{C}$ and $\boldsymbol{c}$ need to be adjusted accordingly, so as to reflect the switch of variable from $\delta\boldsymbol{\tau}$ during the optimization to the fixed point $\boldsymbol{\tau}^{\star}$ used to compute gradients. Note that at convergence we can use $\boldsymbol{\tau}^i \approx \boldsymbol{\tau}^{\star}$ , giving access to all the necessary variables to compute gradients with respect to $\theta$ .

# G DETAILS OF EXPERIMENT 1

![](images/254ff623ceab31c2a31d77540af61c550c95a720547e14cb77f28c99f88e0a91.jpg)
Figure S2: Illustration of the learning curves for 5 different runs of the "forced autonomous" iLQR-VAE model. The model consistently gets stuck in plateaus during the optimization, leading to slower convergence than its "free" counterparts (see Figure 1).
The data in Section 3.1 was generated from an autonomous linear dynamical system with $n = 8$ , $m = 3$ , and $n_o = 8$ , where $n_o$ is the dimension of the observation space. All the models were fit using dynamics within the ground-truth model class, i.e. with linear dynamics, $n = 8$ , $m = 3$ , and $n_o = 8$ . We optimized the model parameters with Adam, using (manually optimized) learning rates of $0.04 / (1 + \sqrt{k / 1})$ for the free iLQR-VAE model, $0.04 / (1 + \sqrt{k / 1})$ for autonomous iLQR-VAE, and $0.02 / (1 + \sqrt{k / 30})$ for LFADS, where $k$ is the iteration number. We used GRU networks with 32 units to parametrize the LFADS encoders (one encoder for the initial condition and one for the inputs). Note that while all methods run in similar wallclock time in this example, this will ultimately be implementation- and data-dependent.

In Figure S2, we show additional learning curves for the "forced autonomous" models; these show that, even for different initializations and trajectories through the loss landscape, the model consistently gets stuck in plateaus. This can be contrasted with the free-form iLQR-VAE models.

# H FURTHER DETAILS OF LORENZ ATTRACTOR

The chaotic Lorenz attractor consists of a three-dimensional state $(\ell_1,\ell_2,\ell_3)$ evolving according to

$$
\dot{\ell}_{1} = 10\left(\ell_{2} - \ell_{1}\right) \quad \dot{\ell}_{2} = \ell_{1}\left(28 - \ell_{3}\right) - \ell_{2} \quad \dot{\ell}_{3} = \ell_{1}\ell_{2} - 8\ell_{3}/3 \tag{S40}
$$

For our example, we generated data by integrating Equation S40 over a long time period using a Runge-Kutta solver (RK4), followed by z-scoring and splitting the resulting state trajectory into 112 non-overlapping bouts (Figure 2A). We added Gaussian noise with a standard deviation of 0.1, and trained iLQR-VAE on this dataset (Figure 2B, bottom). We then fitted these data using GRU dynamics with $n = 20$ and $m = 5$ .
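The data-generation procedure above can be sketched as follows (an illustrative NumPy reconstruction; the step size, burn-in length, and bout length are our own assumptions, while the 112 non-overlapping bouts and the 0.1 noise standard deviation follow the text):

```python
import numpy as np

def lorenz(l):
    # Vector field of Equation S40.
    l1, l2, l3 = l
    return np.array([10.0 * (l2 - l1),
                     l1 * (28.0 - l3) - l2,
                     l1 * l2 - 8.0 * l3 / 3.0])

def rk4_step(l, dt):
    # One classical fourth-order Runge-Kutta (RK4) step.
    k1 = lorenz(l)
    k2 = lorenz(l + 0.5 * dt * k1)
    k3 = lorenz(l + 0.5 * dt * k2)
    k4 = lorenz(l + dt * k3)
    return l + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def make_lorenz_dataset(n_bouts=112, bout_len=100, dt=0.01,
                        noise_sd=0.1, burn_in=1000, seed=0):
    rng = np.random.default_rng(seed)
    l = np.array([1.0, 1.0, 1.0])
    for _ in range(burn_in):  # settle onto the attractor first
        l = rk4_step(l, dt)
    traj = np.empty((n_bouts * bout_len, 3))
    for i in range(len(traj)):
        l = rk4_step(l, dt)
        traj[i] = l
    traj = (traj - traj.mean(0)) / traj.std(0)  # z-score each dimension
    bouts = traj.reshape(n_bouts, bout_len, 3)  # non-overlapping bouts
    return bouts + noise_sd * rng.standard_normal(bouts.shape)

data = make_lorenz_dataset()
assert data.shape == (112, 100, 3)
```

The bout length used in the experiments is not stated here, so `bout_len=100` is purely illustrative.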
+ +The normalized $k$ -step mean-squared error was defined as follows: + +$$ +\mathrm {M S E} _ {k} = \sum_ {t = 0} ^ {T - k} \| \boldsymbol {x} _ {t + k} - \hat {\boldsymbol {x}} _ {t + k} \| ^ {2} \tag {S41} +$$ + +$$ +R _ {k} ^ {2} = 1 - \frac {\mathrm {M S E} _ {k}}{\sum_ {t = 0} ^ {T - k} \left\| \boldsymbol {x} _ {t + k} - \bar {\boldsymbol {x}} \right\| ^ {2}} \tag {S42} +$$ + +where $\hat{\pmb{x}}_{t + k}$ is the prediction at time $t + k$ , and $\bar{x}$ the mean for this trial. + +# I LEARNING INPUT-DRIVEN NONLINEAR DYNAMICS + +To bridge the gap between autonomous nonlinear dynamics (see Section 3.2) and real data, we evaluated iLQR-VAE on an input-driven nonlinear system, the Duffing oscillator. We generated Duffing trajectories that included a perturbation of the Duffing state half-way through. We then embedded those into the spiking activity of 200 neurons (see below). We found that iLQR-VAE could successfully learn the dynamics and infer the timing of the perturbations. + +![](images/a6a0923fc041083a5d89b6a3f76dcdde7d6a0d5d8cdf3a7b1ae0af0ad1da6ea4.jpg) +Figure S3: (A) Top: example Duffing trajectories (100 time steps, $dt = 0.03$ ) starting from random initial conditions. Bottom: negative ELBO during the course of training. (B) Top: spike raster corresponding to an example test sample. Middle: first two principal components of the posterior mean over firing rates (red), given the spiking data shown at the top. For comparison, the ground-truth PCs in the first half of the trial (before the perturbation) are shown as black dots, with their hypothetical unperturbed continuation shown as a dashed line. The second half of the ground-truth PCs (after the perturbation) are shown in gray. Bottom: norm of the inferred input. + +![](images/302c4c4869d600a4b6f33148fa6b4f4a897c479210efce61c6b50f4be6ed2ed1.jpg) + +The dynamics of the Duffing oscillator are given by + +$$ +\dot {x} _ {1} = x _ {2} \quad \dot {x} _ {2} = x _ {1} - x _ {1} ^ {3}. 
\tag{S43}
$$

To generate each training sample, we integrated Equation S43 from two different random initial conditions for 100 time steps each, using a Runge-Kutta solver (RK4) with $dt = 0.03$ . Examples of such trajectories are shown in Figure S3A (top); note that each trajectory can be understood as the evolution of the system in state-space for a given energy level of the oscillator. We then concatenated these two trajectories to yield a single trajectory of 200 steps with a perturbation in the middle. We then linearly mapped the low-dimensional oscillator state onto a 200-dimensional state, before passing it through the nonlinearity of Equation S7 to obtain a set of firing rates, which then gave rise to observations via a Poisson process (Figure S3B, top).

We generated 112 training and 112 testing trials in this way. We fit these data using iLQR-VAE with $n = 20$ and $m = 4$ , and found that it could successfully infer the latent trajectories (see Figure S3B, middle). Importantly, iLQR-VAE learned to fit most of the trajectories as an autonomously evolving dynamical system, and only used inputs to explain the sudden change in the oscillator's energy level triggered by the perturbation (see Figure S3B, bottom). This shows that the model can successfully disentangle ongoing dynamics from external inputs, suggesting that it is well-suited for identifying input-driven dynamics in real data.

# J COMPARISON OF LFADS AND ILQR-VAE ON A TOY INPUT INFERENCE TASK

![](images/4c3d38db909864256de38d988632659b41876004f9d0ba7b1f4887a1c9fbc3fd.jpg)
Figure S4: Details of the sparse input inference in LFADS and iLQR-VAE. (A) Top: example observations (black dots) and inferred posterior mean (blue line). Bottom: true and inferred inputs. LFADS can infer the timing of the largest input, but also uses non-zero inputs during the rest of the trial. (B) Comparison of the true (black) and learned (blue) eigenvalue spectra.
+ +![](images/2ced180dd137e3fec82d894abba0e2dc85fe9caabf7fe30a2080117991d77308.jpg) + +We used the LFADS implementation from https://github.com/google-research/computation-thru-dynamics/tree/master/lfads_tutorial, which we modified to include linear dynamics and Gaussian likelihoods. We then evaluated the quality of the input reconstruction by measuring how much input variance was captured by the models. We report this as the $R^2$ from inferred to true inputs. + +We used a generative model within the ground truth model class. For each dataset, we performed a hyper-parameter search to choose the best-performing encoder architecture and learning rate for LFADS. + +Results of this experiment are summarized in Table S1. iLQR-VAE – which did not require any hyperparameter tuning for these examples – inferred inputs more accurately for all dataset sizes and trial lengths. + +
| Dataset | LFADS | iLQR-VAE |
| --- | --- | --- |
| S 1x1000 | 0.05 ± 0.02 | 0.94 ± 0.01 |
| S 10x100 | 0.15 ± 0.06 | 0.84 ± 0.01 |
| S 32x100 | 0.29 ± 0.01 | 0.83 ± 0.05 |
| S 56x100 | 0.31 ± 0.08 | 0.80 ± 0.02 |
| S 10x200 | 0.27 ± 0.01 | 0.93 ± 0.01 |
| AR 56x100 | 0.28 ± 0.02 | 0.81 ± 0.02 |
Table S1: Comparison of iLQR-VAE and LFADS on 6 input inference tasks. Results are reported as $R^2$ (mean ± sem) over 3 random seeds for each.

Our results suggest that LFADS' performance improves with larger amounts of data. More surprisingly, LFADS also seems to perform better when the data is split into shorter trials. In particular, we found it difficult to fit LFADS on the single long trial, but the dynamics could be learned more accurately if these data were split into 10 trials of 100 steps.

![](images/8733f2b6f7a3f929a44ce0a219a2b9796200c7c92935fb5788f817006893271a.jpg)
Figure S5: Comparison of the Student (pink) and Gaussian (green) priors for learning a linear dynamical system driven by autoregressive inputs (AR) or sparse inputs. (A) Fit of the observations using the Gaussian (top) and Student (bottom) priors. Both fit the observations highly accurately. (B) Inferred input norm for both priors. The temporal structure of the signal is very similar in both cases, as the Student prior essentially becomes Gaussian for large values of $\nu$ . (C) Evolution of the loss for both choices of prior. The loss curves are closely aligned, and both models converge to a similar ELBO value. (D) Fit of the observations using the Gaussian (top) and Student (bottom) priors. Both models fit the observations, but the Student prior yields smoother trajectories. (E) Inferred input norm for both priors. The Student prior is close to 0 at all times, except when it requires a sharp input to explain the data. On the other hand, the Gaussian prior requires a large variance to be able to fit sparse inputs, leading to non-zero inferred inputs at all time points. (F) Evolution of the loss for both prior choices. The loss curve for the Student prior is lower than the Gaussian one throughout training, and converges to a higher ELBO value.

On the other hand, iLQR-VAE inferred inputs more accurately for longer trials.
This is what we would expect if the model is well learned, as longer trials contain more information to fit the inputs accurately. + +One important distinction between the two methods, which partly explains LFADS' lower $R^2$ , is the prior assumed over the inputs (an auto-regressive prior for LFADS and a Student prior for iLQR-VAE). In Figure S4 we show an example of LFADS' inference on one of the test examples of the S 56x100 dataset. In this example, LFADS infers its largest input concurrently with the ground truth input, but also infers small inputs when there are none in the ground truth. This has a significant impact on the $R^2$ metric. Note however that this is not the only effect at play here, as emphasized by the lower performance on the AR dataset. The impact of the choice of prior in iLQR-VAE is discussed further in Appendix K. + +# K COMPARISON OF THE STUDENT AND GAUSSIAN PRIORS + +In this section, we compare the performance of the Gaussian and Student priors on two toy examples. The first consists of data generated by a linear dynamical system ( $n = 3$ , $m = 3$ , $n_{o} = 10$ , Gaussian likelihood) driven by autoregressive Gaussian inputs (close to the Gaussian prior). The second uses the same system, but driven by sparse inputs (closer to the Student prior). We find that in the first example, both priors yield extremely similar results (see Figure S5(A-C)). Indeed, the Student prior learns a very high value of $\nu \sim 20$ , thus becoming nearly Gaussian. + +In the sparse input case however, the Student prior fits the data considerably better. As we can see in Figure S5(D-F), the Gaussian prior learns a large variance to fit the sparse inputs, leading to higher baseline noise than in the true system. + +As shown here, the Student prior offers a more flexible model, as the Gaussian case is recovered for large $\nu$ values.
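To make this concrete, the following sketch (ours, not from the paper's code) compares the standard Student-t and Gaussian log-densities, showing numerically that the Student prior approaches the Gaussian for large $\nu$ while keeping much heavier tails, and hence more tolerance for sparse, spike-like inputs, for small $\nu$:

```python
from math import lgamma, log, pi

def gaussian_logpdf(x):
    # Standard normal log-density.
    return -0.5 * log(2 * pi) - 0.5 * x * x

def student_logpdf(x, nu):
    # Standard Student-t log-density with nu degrees of freedom.
    return (lgamma((nu + 1) / 2) - lgamma(nu / 2)
            - 0.5 * log(nu * pi)
            - (nu + 1) / 2 * log(1 + x * x / nu))

# For nu ~ 20 (the value learned in the AR example), the two priors
# are nearly indistinguishable over typical input magnitudes.
for x in [0.0, 1.0, 2.0]:
    assert abs(student_logpdf(x, 20) - gaussian_logpdf(x)) < 0.1

# For small nu, the Student prior puts far more mass on large
# (sparse, spike-like) inputs than the Gaussian does.
assert student_logpdf(4.0, 2) > gaussian_logpdf(4.0) + 3
```

Note also that the negative Gaussian log-density is quadratic in its argument, whereas the Student term is not convex for large $|x|$, which is relevant to the optimization caveat discussed in the text.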
Note however that using the Gaussian prior ensures that the input term in the iLQR cost function is always convex in u, which can facilitate the optimization and allow iLQR to converge faster in some cases. Moreover, in the case of autonomous dynamics (e.g. the Lorenz attractor and Maze datasets) both priors will converge to the same solution. + +# L FURTHER DETAILS OF SINGLE TRIAL ANALYSES + +# L.1 BENCHMARKING AGAINST EXISTING METHODS + +To allow for direct comparison with the benchmarks reported in Pei et al. (2021), we first used data provided by the Neural Latents Benchmark (NLB) challenge, available at https://gui.dandiarchive.org/#/dandiset/000128. + +We used 1720 training trials and 510 validation trials, which were drawn randomly for each instantiation of the model to avoid overfitting to test data. The risk of overfitting to the dataset was lowered by the fact that iLQR-VAE requires very little hyperparameter optimization. For this experiment, we fitted iLQR-VAE to the neural activity using a model with MGU dynamics ( $n = 60$ ), a Student prior over inputs ( $m = 15$ ), and a Poisson likelihood ( $n_{o} = 182$ neurons). We trained models on trials spanning all reach conditions, restricting the data to a time window starting $250~\mathrm{ms}$ before and ending $450~\mathrm{ms}$ after movement onset, binned at $5\mathrm{ms}$ . For regression to hand velocity, we introduced a lag of $100\mathrm{ms}$ between neural activity and hand velocity. As the test data used in the NLB challenge is not publicly available, the results we reported were not computed on the exact same data split. However, the model performed highly consistently across random seeds, such that we expect iLQR-VAE's performance to be directly comparable to the results from Pei et al. (2021). To fit these data, we ran iLQR-VAE on 168 CPUs for $\sim 6\mathrm{h}$ , using a mini-batch size of 168 trials.
+ +The co-smoothing metric used to assess how well the model fits the data is defined as the log-likelihood score: + +$$
\frac {1}{n _ {sp} \log 2} \left(\mathcal {L} (\boldsymbol {\lambda}; \hat {\boldsymbol {y}} _ {n, t}) - \mathcal {L} (\hat {\boldsymbol {\lambda}} _ {n}; \hat {\boldsymbol {y}} _ {n, t})\right) \tag {S44}
$$ + +where the overall log-likelihood $\mathcal{L}$ is the sum of the log-likelihoods evaluated at all time points and for all neurons, $\lambda$ denotes the vector of inferred time-varying firing rates, $\hat{\lambda}_n$ is the mean firing rate of neuron $n$ , and $n_{sp}$ is the total number of spikes. + +# L.2 FURTHER ANALYSES + +A key feature of monkey M1 motor cortical recordings is the prevalence of rotational dynamics in the data (Churchland et al., 2012). These can be captured using jPCA, a method developed to find the subspace in which the dynamics are most rotational, which was recently generalized by Rutten et al. (2020). Here, we found that we could uncover clean rotational dynamics from the single-trial firing rates, similarly to Pandarinath et al. (2018). + +![](images/91171e97ced975de5ac03bedf7ac1b0b19dfddd5ce10871d224325da63231127.jpg) +Figure S6: Projection of the neural activity of 200 movements in the subspace defined by the top two jPCA axes. jPCA finds the subspace capturing most rotations in the data, while spanning the same space as the top 2 principal components. Here, the jPCA subspace was found using the single-trial firing rates. Projection of the neural activity yields very clean rotational trajectories. + +# M FURTHER DETAILS OF THE CONTINUOUS REACHING TASK ANALYSIS + +# M.1 DETAILS OF THE ANALYSES + +For our analyses of the primate data in Section 3.4, we considered the first 22 minutes of the recording session 'indy_20160426' from O'Doherty et al. (2018). We binned spikes at 25 ms resolution and considered all neurons with a firing rate of at least $2\mathrm{Hz}$ .
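This preprocessing (fixed-width binning of spike times and a minimum-rate inclusion threshold) can be sketched as follows; the 25 ms / 2 Hz settings mirror the text, but the function names and toy spike trains are ours:

```python
def bin_spikes(spike_times, duration, bin_width):
    """Count spikes from one neuron in consecutive bins of `bin_width` seconds."""
    n_bins = int(duration / bin_width)
    counts = [0] * n_bins
    for t in spike_times:
        b = int(t / bin_width)
        if b < n_bins:
            counts[b] += 1
    return counts

def select_neurons(spike_trains, duration, min_rate_hz):
    """Keep neurons whose mean firing rate is at least `min_rate_hz`."""
    return [st for st in spike_trains if len(st) / duration >= min_rate_hz]

duration = 10.0  # seconds of (toy) recording
neurons = [
    [0.1 * k for k in range(50)],  # 5 Hz neuron -> kept
    [1.0, 4.0, 9.5],               # 0.3 Hz neuron -> discarded
]
kept = select_neurons(neurons, duration, min_rate_hz=2.0)
counts = bin_spikes(kept[0], duration, bin_width=0.025)
assert len(kept) == 1
assert len(counts) == 400 and sum(counts) == 50
```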
Behavioural data took the form of the velocity of the monkey's hand in the xy-plane, and were extracted as the first derivative of a cubic spline fitted to the position over time. We z-scored the hand velocity and shifted it by 120ms, following Jensen et al. (2021). + +To fit iLQR-VAE, the resulting dataset was divided into 336 non-overlapping pseudo-trials, of which a random half were used to fit the generative model and the other half were held out as a test dataset. We fitted a model with $n = 50$ , $m = 10$ using the non-linear dynamics described in Equation S4. The latent state was then mapped onto both the kinematics and the neural observations. We used a linear readout from latents to the 2D kinematic variables, and a linear readout followed by a nonlinearity from latents to the firing rates of 130 neurons. + +After fitting iLQR-VAE to neural activity and behaviour jointly, we then proceeded to infer $\mathbf{u}$ from neural activity alone. Next, we computed the kinematic reconstruction error on the test dataset as the fraction of variance captured in both x- and y- hand velocities. + +Finally, we analyzed the inputs to the model after fitting, in relation to specific events in the task and the behaviour. We defined 'movement onset' after each target onset as the time at which the hand speed first exceeded $0.03\mathrm{ms}^{-1}$ . For visualization purposes, we aligned the z-scored input $\pmb{u}$ on each trial to target onset and movement onset separately. We performed a similar analysis for hand speed and mean neural activity, which were also z-scored and aligned to target and movement onset for comparison with the control input. + +# M.2 COMPARISON OF ILQR-VAE AND BGPFA + +![](images/27ee4d63c9a563cf74d427e083474eba39833dfb9956a8849472f964fe18e789.jpg) +Figure S7: Comparison of the firing rates inferred by bGPFA and iLQR-VAE on the first 100ms of the continuous reaching task data, for three different neurons.
bGPFA learns smoother trajectories. On the other hand, fitting iLQR-VAE with a Gaussian prior with no temporal structure captures more variance in the firing rates, which in turn leads to better decoding of the kinematics. + +As a further way of understanding the relative benefits and disadvantages of iLQR-VAE, we compared its performance with bGPFA, a fully Bayesian extension of GPFA (Yu et al., 2009) that enables the use of non-Gaussian likelihoods, scales to very large datasets, and was recently shown to outperform standard GPFA on this same continuous reaching dataset (Jensen et al., 2021). Importantly, bGPFA makes different assumptions from iLQR-VAE, as it places a smooth prior directly on the latents with no explicit notion of dynamics. We fit both methods using 10 minutes of data (chunked into pseudo-trials for iLQR-VAE, and as a single continuous trial for bGPFA). For iLQR-VAE, we then performed inference and retrained the posterior covariance on the first minute of data whilst fixing the generative parameters. We found that while both methods captured similar trends in the firing rates, bGPFA yielded smoother estimates, whereas iLQR-VAE captured larger modulations (consistent with the higher $R^2$ when regressing from firing rates to hand velocity). Note that the firing rate estimates here are not as smooth as for the Maze dataset (c.f. Figure 4A), because iLQR-VAE was fit using a Gaussian prior over inputs with non-zero variance at all times, effectively implying an autoregressive prior on the latent trajectories and firing rates. + +From Figure S7, one can notice that bGPFA struggles to capture larger variations in the firing rate. This suggests oversmoothing, and might explain why the method does not capture hand kinematics as well as iLQR-VAE ( $R^2 = 0.6$ for bGPFA and $0.76$ for iLQR-VAE). This is indeed what we see in Figure S8.
+ +![](images/81b7c551b599728cc0acea66488fedaa2152bf792f85bc2287c2e8a4b57ff91e.jpg) +Figure S8: Ground truth hand velocity (red) and decoded kinematics using iLQR-VAE (blue) and bGPFA (dotted black). A linear decoder was trained on 9 minutes of data and then evaluated on the first minute of the recording (shown here). bGPFA struggles to capture the biggest peaks in the velocity, consistent with the smoother firing rates and the lower $R^2$ . + +# N LINK TO KALMAN FILTERING + +The Linear Quadratic Regulator (LQR) and the Kalman filter (Kalman, 1964) are algorithms designed for systems with linear dynamics and Gaussian noise. LQR finds the optimal feedback control law to minimize a cost $\mathcal{C}$ in deterministic systems, while the Kalman filter yields an estimate of the state from observations corrupted by process and observation noise. It is well known that Kalman filtering and LQR are duals of one another, and they can both be combined into LQG to yield an optimal control law from noisy observations. Here, we explore another link between LQR and Kalman smoothing, by showing how LQR can be used as a Kalman smoother. Moreover, in order to gain insights into the learning process of iLQR-VAE, we explore different procedures for learning the parameters of a Kalman filter. + +# LINEAR QUADRATIC CONTROL AS FILTERING + +The Kalman smoother assumes dynamics of the form + +$$
\boldsymbol {z} _ {t + 1} = \boldsymbol {A} \boldsymbol {z} _ {t} + \boldsymbol {B} ^ {K} \boldsymbol {w} _ {t} \tag {S45}
$$ + +$$
\boldsymbol {o} _ {t} = \boldsymbol {C} \boldsymbol {z} _ {t} + \boldsymbol {v} _ {t} \tag {S46}
$$ + +with $\boldsymbol{w} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{I})$ , $\boldsymbol{v} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}_v)$ , and the initial condition assumed to be drawn from a Gaussian distribution with known parameters, $\boldsymbol{z}_1 \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Pi})$ .
+ +On the other hand, LQR assumes the following fully-deterministic dynamics: + +$$
\boldsymbol {z} _ {t + 1} = \boldsymbol {A} \boldsymbol {z} _ {t} + \boldsymbol {B} ^ {I} \boldsymbol {u} _ {t} \tag {S47}
$$ + +$$
o _ {t} = C z _ {t} \tag {S48}
$$ + +with $z_{1}$ known exactly. + +Note that in the iLQR-VAE framework we have thus far only considered cases where no observed external inputs were given. However, these can be straightforwardly included as an additional $\hat{B}\hat{u}$ term in Equation S47 and Equation S45. + +The Kalman smoother's objective is to minimize the expected mean squared error between the inferred latent state and the true state, $\mathbb{E}\left[\| \boldsymbol{z} - \hat{\boldsymbol{z}}\| ^2\right]$ . As described in Aravkin et al. (2017), with linear dynamics and Gaussian noise, this becomes equivalent to minimizing the following objective w.r.t. $z$ : + +$$
\mathcal {L} (\boldsymbol {z}) = \left\| \boldsymbol {\Pi} ^ {- 1 / 2} \left(\boldsymbol {z} _ {1} - \boldsymbol {\mu}\right) \right\| ^ {2} + \sum_ {t = 1} ^ {T - 1} \left\| \boldsymbol {B} ^ {K} \left(\boldsymbol {z} _ {t + 1} - \boldsymbol {A} \boldsymbol {z} _ {t}\right) \right\| ^ {2} + \sum_ {t = 1} ^ {T} \left\| \boldsymbol {\Sigma} _ {\boldsymbol {v}} ^ {- 1 / 2} \left(\boldsymbol {o} _ {t} - \boldsymbol {C} \boldsymbol {z} _ {t}\right) \right\| ^ {2} \tag {S49}
$$ + +where the first two terms correspond to the prior over the initial condition and the smoothness of the trajectory, and the last term represents the likelihood of the observations.
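As a sanity check, Equation S49 can be transcribed directly in code. The following is our own minimal scalar (1-dimensional) sketch, not the paper's implementation; note that, following the equation as written here, the smoothness residual is weighted by $B^K$ itself rather than by its inverse:

```python
def kalman_smoothing_objective(z, o, A, Bk, C, var_v, mu, Pi):
    """Scalar version of Equation S49: prior on the initial condition,
    smoothness of the latent trajectory, and observation likelihood."""
    T = len(z)
    init_term = (z[0] - mu) ** 2 / Pi
    smooth_term = sum((Bk * (z[t + 1] - A * z[t])) ** 2 for t in range(T - 1))
    obs_term = sum((o[t] - C * z[t]) ** 2 / var_v for t in range(T))
    return init_term + smooth_term + obs_term

# A noiseless trajectory of the dynamics z_{t+1} = A z_t, observed exactly
# and started at the prior mean, incurs zero cost.
A, Bk, C, var_v, mu, Pi = 0.9, 1.0, 2.0, 0.1, 1.0, 1.0
z = [1.0]
for _ in range(9):
    z.append(A * z[-1])
o = [C * zt for zt in z]
assert kalman_smoothing_objective(z, o, A, Bk, C, var_v, mu, Pi) == 0.0
```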
Interestingly, the objective in Equation S49 can be related to the one we minimize to find the posterior mean in iLQR-VAE (Equation 11): + +$$
\mathcal {L} (\boldsymbol {u}) = \left\| \boldsymbol {\Sigma} _ {0} ^ {- 1 / 2} \boldsymbol {u} _ {0} \right\| ^ {2} + \sum_ {t = 1} ^ {T - 1} \left\| \boldsymbol {\Sigma} _ {u} ^ {- 1 / 2} \boldsymbol {u} _ {t} \right\| ^ {2} + \sum_ {t = 1} ^ {T} \left\| \boldsymbol {\Sigma} _ {\boldsymbol {v}} ^ {- 1 / 2} \left(\boldsymbol {o} _ {t} - \boldsymbol {C} \boldsymbol {z} _ {t}\right) \right\| ^ {2} \tag {S50}
$$ + +$$
\mathcal {L} (\boldsymbol {u}) = \left\| \boldsymbol {\Sigma} _ {0} ^ {- 1 / 2} \boldsymbol {u} _ {0} \right\| ^ {2} + \sum_ {t = 1} ^ {T - 1} \left\| \boldsymbol {\Sigma} _ {u} ^ {- 1 / 2} \left(\boldsymbol {z} _ {t + 1} - \boldsymbol {A} \boldsymbol {z} _ {t}\right) \right\| ^ {2} + \sum_ {t = 1} ^ {T} \left\| \boldsymbol {\Sigma} _ {\boldsymbol {v}} ^ {- 1 / 2} \left(\boldsymbol {o} _ {t} - \boldsymbol {C} \boldsymbol {z} _ {t}\right) \right\| ^ {2} \tag {S51}
$$ + +where the second equality uses $\boldsymbol{u}_t = \boldsymbol{z}_{t+1} - \boldsymbol{A}\boldsymbol{z}_t$ , which follows from Equation S47 when $\boldsymbol{B}^I = \boldsymbol{I}$ . The right-hand sides of Equation S49 and Equation S51 become identical when $\pmb{\Sigma}_{u}^{-1/2} = \pmb{B}^{K}$ and $\pmb{\Sigma}_{0} = \pmb{\Pi}$ . Note that the introduction of the $B^{I}$ matrix in Equation S47 unties the two formulations slightly by allowing for further mixing between the input channels that isn't accounted for by the prior. In the examples we consider next, we therefore set $B^{I} = I$ . + +The above equations show how LQR can be used to solve the standard Kalman filtering problem, with the key difference being that the optimization is performed over inputs $\mathbf{u} = \{\mathbf{u}_0,\dots ,\mathbf{u}_{T - 1}\}$ rather than latent trajectories $z = \{z_{1},\ldots ,z_{T}\}$ directly. This is illustrated in Figure S9(A), where a Rauch-Tung-Striebel (RTS) smoother and LQR were run on the same set of 8-dimensional observations arising from an 8-dimensional linear dynamical system, and inferred the same latent trajectory given the ground-truth parameters.
As we only use LQR to parametrize the mean of the posterior distribution, we trained the recognition model for 100 steps to obtain the uncertainty over the latents, which was very similar to the output of the RTS smoother. + +# LEARNING A KALMAN FILTER + +We then proceeded to learn the parameters of the models using either iLQR-VAE, an Expectation-Maximization (EM) procedure, or direct minimization of the negative log-likelihood (NLL) of the data (Figure S9B-C). + +Interestingly, the EM algorithm is closely related to iLQR-VAE, since the E-step finds the latent trajectories minimizing Equation S49, whereas iLQR-VAE solves Equation S50 in an inner optimization loop. While there exists an analytical solution for the M-step in the case of the Kalman filter, this does not generalize to nonlinear dynamics and non-Gaussian noise. We therefore used a gradient descent procedure for the maximization step. + +Both the gradient-based M-step and the direct NLL minimization were performed using Adam with a learning rate of 0.02, and with initial parameters drawn from the same distributions. We see in Figure S9C that iLQR-VAE reaches a smaller NLL in considerably fewer iterations than gradient descent, which we hypothesize is due to the good preconditioning given by iLQR (discussed in Figure 1). Note however that the cost of one iLQR-VAE iteration is higher than the direct computation of Equation S49. + +In this section, we have shown in a simple linear-quadratic example how iLQR-VAE performs filtering by inferring the process noise as inputs. While this is undoubtedly an unconventional approach, it becomes particularly valuable in cases where the dynamics are non-linear and the noise non-Gaussian. Indeed, in such cases the problem of learning an estimator for the latent state is a very difficult one, typically solved using methods such as particle filtering or unscented Kalman filters (Doucet and Johansen, 2009; Wan et al., 2001). iLQR-VAE offers another way to solve this problem, with close links to the aforementioned approaches.
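For reference, the NLL minimized directly above can be computed exactly in the linear-Gaussian case with a Kalman filter, via the prediction-error (innovations) decomposition. This scalar sketch is ours, not the paper's code:

```python
from math import log, pi

def kalman_nll(o, A, C, var_w, var_v, mu, Pi):
    """Exact NLL of observations under a scalar linear-Gaussian state-space
    model, via the innovations (prediction-error) decomposition."""
    m, P = mu, Pi  # predictive mean/variance of the current latent state
    nll = 0.0
    for y in o:
        S = C * C * P + var_v               # innovation variance
        e = y - C * m                       # innovation (prediction error)
        nll += 0.5 * (log(2 * pi * S) + e * e / S)
        K = P * C / S                       # Kalman gain
        m, P = m + K * e, (1 - K * C) * P   # filtered posterior
        m, P = A * m, A * A * P + var_w     # predict the next step
    return nll

# Sanity check on a single observation: the NLL is just the Gaussian
# negative log-density of o_1 ~ N(C*mu, C^2*Pi + var_v).
A, C, var_w, var_v, mu, Pi = 0.9, 1.0, 0.1, 0.2, 0.0, 1.0
S1 = C * C * Pi + var_v
expected = 0.5 * (log(2 * pi * S1) + 0.25 / S1)
assert abs(kalman_nll([0.5], A, C, var_w, var_v, mu, Pi) - expected) < 1e-12
```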
+ +# O ANALYSIS OF THE INFERENCE GAP + +In order to evaluate the benefits of defining the recognition model implicitly through the generative parameters, we compared iLQR-VAE to a more standard sequential variational auto-encoder, using a bidirectional recurrent neural network as the recognition model. We generated data from the same system as in Figure 3, in the form of 76 trials of 100 time steps. We used the same generative model in both cases (linear dynamics with $n = 3$ , $m = 3$ , $n_{o} = 10$ , Student prior), such that the only difference lay in the choice of recognition model. We compared the ELBO to a more accurate estimate of the log-likelihood, the Importance Weighted Autoencoder (IWAE) bound (Burda et al., 2015), which is computed as + +$$
\mathcal {L} _ {\mathrm {IWAE}} = \mathbb {E} _ {u _ {1}, \dots , u _ {k} \sim q (u | o)} \left[ \log \left(\frac {1}{k} \sum_ {i = 1} ^ {k} \frac {p (o , u _ {i})}{q (u _ {i} | o)}\right) \right] \tag {S52}
$$ + +where we used Monte-Carlo sampling with 1000 samples to evaluate the expectation. This then allowed us to compute the inference gap (Cremer et al., 2018) of both models as $\mathcal{L}_{\mathrm{IWAE}} - \mathrm{ELBO}$ . As shown in Figure S10, iLQR-VAE has a smaller inference gap throughout training, leading to faster and more robust convergence. This confirms the intuition that keeping the recognition and generative models in sync throughout training reduces the inference gap. + +![](images/9f7c4bfa5b886a4432f9fddf1cebe8ad7b9d27e6205766653863549382dce774.jpg) +Figure S9: Comparison of the posterior mean of a Kalman smoother and LQR. We ran both LQR and a Kalman smoother on noisy observations generated by a latent system (see text for details). (A) The Kalman smoother (blue) and LQR (dotted red) both inferred the same posterior mean for the latent trajectories, matching the true latents (black) almost perfectly. The posterior uncertainty is shown for both cases on half of the data. The iLQR-VAE uncertainty was obtained by optimizing the variance of the recognition model for 200 iterations, and then drawing 1000 samples from the recognition model. (B) We compared learning the parameters of the posterior distribution using iLQR vs. direct minimization of the NLL. On unseen data, both were able to converge to a solution close to the smoothed output trajectory. Note that we stopped the optimization after 8000 iterations. (C) Learning curves of the direct optimization and iLQR-VAE on the same 168 training trials. Note that we used Adam with the same learning rate of 0.02 in both cases, in order to directly compare the effect of the gradient steps. Both x and y axes use a logarithmic scale. + +![](images/16169f8a02a2eb0ccadabbd8cf15962a69cb2c3f345c9396ee27a7762b596a3e.jpg) + +![](images/0bb58b469ad4a167457a9a930e443cfae9842f150e33247c1e4e84f3194ca43b.jpg) + +![](images/24103f89a1c7bf8d567178f5390c885ccbfdcc382109c1e65f4ecf3cc34fd4ac.jpg) + +![](images/4d38bfff9ee8be7a680bd98f17182c50d14133c8fdd9b066eef2020bd0d5b64e.jpg) +Figure S10: Comparison of iLQR with a biRNN recognition model. (A-B) Loss and inference gap as a function of iteration number (starting from iteration 20) for iLQR-VAE (blue) and the biRNN model (pink). (C) Inferred input norm at the end of training for iLQR-VAE and the biRNN model. Ground truth input is shown in dotted black lines. (D) Inferred output at the end of training for iLQR-VAE and the biRNN model. Both models explain the observations (grey dots) well, but iLQR-VAE captures the sharp transitions better. + +![](images/750d17ed7d0bcdbd88d36fa8a3ff5150e56422cf881fd7bdc9c210178e90b429.jpg) + +![](images/4d19d0a7229bc1dc23399972aed39b8883cb80fc33f8e67680fce6c52b8ae03a.jpg) + +# REFERENCES + +Amos, B., Rodriguez, I. D. J., Sacks, J., Boots, B., and Kolter, J. Z. (2018). Differentiable MPC for end-to-end planning and control. arXiv preprint arXiv:1810.13400. + +Aravkin, A., Burke, J. V., Ljung, L., Lozano, A., and Pillonetto, G.
(2017). Generalized Kalman smoothing: Modeling and algorithms. Automatica, 86:63-86. +Bhatia, N. P. and Szegö, G. P. (2002). Stability theory of dynamical systems. Springer Science & Business Media. +Boyd, S. and Vandenberghe, L. (2004). Convex optimization. Cambridge University Press. +Brunton, B. W., Johnson, L. A., Ojemann, J. G., and Kutz, J. N. (2016a). Extracting spatial-temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition. Journal of neuroscience methods, 258:1-15. +Brunton, S. L., Proctor, J. L., and Kutz, J. N. (2016b). Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the national academy of sciences, 113(15):3932-3937. +Burda, Y., Grosse, R., and Salakhutdinov, R. (2015). Importance weighted autoencoders. arXiv preprint arXiv:1509.00519. +Champion, K., Lusch, B., Kutz, J. N., and Brunton, S. L. (2019). Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences, 116(45):22445-22451. +Chen, R. T., Amos, B., and Nickel, M. (2020). Learning neural event functions for ordinary differential equations. arXiv preprint arXiv:2011.03902. +Chen, R. T., Rubanova, Y., Bettencourt, J., and Duvenaud, D. (2018). Neural ordinary differential equations. arXiv preprint arXiv:1806.07366. +Churchland, M. M., Cunningham, J. P., Kaufman, M. T., Foster, J. D., Nuyujukian, P., Ryu, S. I., and Shenoy, K. V. (2012). Neural population dynamics during reaching. Nature, 487:51-56. +Costa, A. C., Ahamed, T., and Stephens, G. J. (2019). Adaptive, locally linear models of complex dynamics. Proceedings of the National Academy of Sciences, 116(5):1501-1510. +Cremer, C., Li, X., and Duvenaud, D. (2018). Inference suboptimality in variational autoencoders. In International Conference on Machine Learning, pages 1078-1086. PMLR. +Doucet, A. and Johansen, A. M. (2009).
A tutorial on particle filtering and smoothing: Fifteen years later. Handbook of nonlinear filtering, 12(656-704):3. +Fieseler, C., Zimmer, M., and Kutz, J. N. (2020). Unsupervised learning of control signals and their encodings in Caenorhabditis elegans whole-brain recordings. Journal of the Royal Society Interface, 17(173):20200459. +Ghahramani, Z. and Hinton, G. E. (1996). Parameter estimation for linear dynamical systems. +Ghahramani, Z. and Hinton, G. E. (2000). Variational learning for switching state-space models. Neural comput., 12:831-864. +Giles, M. (2008). An extended collection of matrix derivative results for forward and reverse mode automatic differentiation. +Heck, J. C. and Salem, F. M. (2017). Simplified minimal gated unit variations for recurrent neural networks. In 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), pages 1593-1596. IEEE. +Hernandez, D., Moretti, A. K., Wei, Z., Saxena, S., Cunningham, J., and Paninski, L. (2018). Nonlinear evolution via spatially-dependent linear dynamics for electrophysiology and calcium data. arXiv preprint arXiv:1811.02459. +Jensen, K. T., Kao, T.-C., Stone, J. T., and Hennequin, G. (2021). Scalable Bayesian GPFA with automatic relevance determination and discrete noise models. Advances in Neural Information Processing Systems, 34. + +Kalman, R. E. (1964). When is a linear control system optimal? +Karush, W. (2014). Minima of Functions of Several Variables with Inequalities as Side Conditions. Springer Basel. +Koppe, G., Toutounji, H., Kirsch, P., Lis, S., and Durstewitz, D. (2019). Identifying nonlinear dynamical systems via generative recurrent neural networks with applications to fMRI. PLoS computational biology, 15(8):e1007263. +Kuhn, H. W. and Tucker, A. W. (2014). Nonlinear programming. In Traces and emergence of nonlinear programming, pages 247-258. Springer. +Kutz, J. N., Brunton, S. L., Brunton, B. W., and Proctor, J. L. (2016).
Dynamic mode decomposition: data-driven modeling of complex systems. SIAM. +Li, W. and Todorov, E. (2004). Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), pages 222-229. CiteSeer. +Linderman, S., Johnson, M., Miller, A., Adams, R., Blei, D., and Paninski, L. (2017). Bayesian learning and inference in recurrent switching linear dynamical systems. In Artificial Intelligence and Statistics, pages 914-922. PMLR. +Morrison, M. J., Fieseler, C., and Kutz, J. N. (2020). Nonlinear control in the nematode C. elegans. Frontiers in Computational Neuroscience, 14:123. +O'Doherty, J. E., Cardoso, M. M. B., Makin, J. G., and Sabes, P. N. (2018). Nonhuman Primate Reaching with Multichannel Sensorimotor Cortex Electrophysiology: broadband for indy_20160630_01. This research was supported by the Congressionally Directed Medical Research Program (W81XWH-14-1-0510). JEO was supported by fellowship #2978 from the Paralyzed Veterans of America. JGM was supported by a fellowship from the Swartz Foundation. +Pandarinath, C., O'Shea, D. J., Collins, J., Jozefowicz, R., Stavisky, S. D., Kao, J. C., Trautmann, E. M., Kaufman, M. T., Ryu, S. I., Hochberg, L. R., et al. (2018). Inferring single-trial neural population dynamics using sequential auto-encoders. Nature methods, 15(10):805-815. +Pei, F., Ye, J., Zoltowski, D., Wu, A., Chowdhury, R. H., Sohn, H., O'Doherty, J. E., Shenoy, K. V., Kaufman, M. T., Churchland, M., et al. (2021). Neural latents benchmark '21: Evaluating latent variable models of neural population activity. arXiv preprint arXiv:2109.04463. +Proctor, J. L., Brunton, S. L., and Kutz, J. N. (2016). Dynamic mode decomposition with control. SIAM Journal on Applied Dynamical Systems, 15(1):142-161. +Proctor, J. L., Brunton, S. L., and Kutz, J. N. (2018). Generalizing Koopman theory to allow for inputs and control. SIAM Journal on Applied Dynamical Systems, 17(1):909-930.
+Rutten, V., Bernacchia, A., Sahani, M., and Hennequin, G. (2020). Non-reversible Gaussian processes for identifying latent dynamical structure in neural data. Advances in Neural Information Processing Systems. +Schmid, P. J. (2010). Dynamic mode decomposition of numerical and experimental data. Journal of fluid mechanics, 656:5-28. +Tassa, Y., Mansard, N., and Todorov, E. (2014). Control-limited differential dynamic programming. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 1168-1175. IEEE. +Wan, E. A., Van Der Merwe, R., and Haykin, S. (2001). The unscented Kalman filter. Kalman filtering and neural networks, 5(2007):221-280. +Yu, B. M., Cunningham, J. P., Santhanam, G., Ryu, S. I., Shenoy, K. V., and Sahani, M. (2009). Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. In Advances in neural information processing systems, pages 1881-1888. \ No newline at end of file diff --git a/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/images.zip b/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..359fe34da424fbe9e82168b2ddc1ba1cf0a253eb --- /dev/null +++ b/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7830b1ec77af6ce9b61cf9b0a00e21d4bd8a2d991fe963cde46db0ac183f8c4 +size 1046374 diff --git a/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/layout.json b/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..045331acf9863e8bf1dbfb95d58aea721e2888d3 --- /dev/null +++ b/ilqrvaecontrolbasedlearningofinputdrivendynamicswithapplicationstoneuraldata/layout.json @@ -0,0 +1,3 @@ +version
https://git-lfs.github.com/spec/v1 +oid sha256:c8429f8b9717eea72d99f94744bf829afc6de1ad5986c0ef150f04d2a6fc3d76 +size 998005 diff --git a/languagemodelingviastochasticprocesses/56add9b7-2705-4783-9457-40575714afe7_content_list.json b/languagemodelingviastochasticprocesses/56add9b7-2705-4783-9457-40575714afe7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9bc68ed1a6ee0b5cfd18f009400b7c5b531d55f1 --- /dev/null +++ b/languagemodelingviastochasticprocesses/56add9b7-2705-4783-9457-40575714afe7_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:00c905a7c931ec4fefde5bce8fb734120a891eb5189d4e2f838f1ca5e006b284 +size 194395 diff --git a/languagemodelingviastochasticprocesses/56add9b7-2705-4783-9457-40575714afe7_model.json b/languagemodelingviastochasticprocesses/56add9b7-2705-4783-9457-40575714afe7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9e883e101e39a6a91a32cd0f1a333643966f6cc0 --- /dev/null +++ b/languagemodelingviastochasticprocesses/56add9b7-2705-4783-9457-40575714afe7_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a53c102fe258f2c408dbf2e469e188e8a5e76e2bf81e776e76384e21e5cd78a0 +size 234285 diff --git a/languagemodelingviastochasticprocesses/56add9b7-2705-4783-9457-40575714afe7_origin.pdf b/languagemodelingviastochasticprocesses/56add9b7-2705-4783-9457-40575714afe7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..15f6ecf01a374170cac84cba157ceed24ac4960c --- /dev/null +++ b/languagemodelingviastochasticprocesses/56add9b7-2705-4783-9457-40575714afe7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a498c1cd2719e6f25021896bbc8a0dd8655aa6ec028af274e15daf3b19ec8c1 +size 2549845 diff --git a/languagemodelingviastochasticprocesses/full.md b/languagemodelingviastochasticprocesses/full.md new file mode 100644 index 
0000000000000000000000000000000000000000..9f35ef933e83936620e29fcdcb8f23ed5b2cc07a --- /dev/null +++ b/languagemodelingviastochasticprocesses/full.md @@ -0,0 +1,771 @@ +# LANGUAGE MODELING VIA STOCHASTIC PROCESSES + +Rose E. Wang, Esin Durmus, Noah Goodman, Tatsunori B. Hashimoto +Stanford University + +{rewang, edurmus, ngoodman, thashim}@stanford.edu + +# ABSTRACT + +Modern language models can generate high-quality short texts. However, they often meander or are incoherent when generating longer texts. These issues arise from the next-token-only language modeling objective. To address these issues, we introduce Time Control (TC), a language model that implicitly plans via a latent stochastic process. TC does this by learning a representation which maps the dynamics of how text changes in a document to the dynamics of a stochastic process of interest. Using this representation, the language model can generate text by first implicitly generating a document plan via a stochastic process, and then generating text that is consistent with this latent plan. Compared to domain-specific methods and fine-tuning GPT2 across a variety of text domains, TC improves performance on text infilling and discourse coherence. On long text generation settings, TC preserves the text structure both in terms of ordering (up to $+40\%$ better) and text length consistency (up to $+17\%$ better). Human evaluators also prefer TC's output $28.6\%$ more than the baselines. $^{1}$ + +# 1 INTRODUCTION + +Large language models (LLM) such as GPT-2 have been extremely successful in text generation (Radford et al., 2019; Brown et al., 2020). However, LLMs are known to generate incoherent long texts. One reason is that they are unable to plan ahead or represent long-range text dynamics (Kiddon et al., 2016; Fan et al., 2019; Hua & Wang, 2020; Duboue & McKeown, 2001; Stent et al., 2004; Tamkin et al., 2020). 
As a result, they often produce wandering content with poor discourse structure and low relevance (Hua & Wang, 2020; Zhao et al., 2017; Xu et al., 2020); the text reads as if the model has no anchored goal when generating. These problems with coherence are further exacerbated when forcing autoregressive models to generate longer texts, as the model struggles to extrapolate beyond its expected text end point. These problems suggest that LLMs currently fail to properly capture how documents evolve from beginning to end. Doing so is critical for succeeding in goal-oriented tasks such as story, dialog, or recipe generation.

Prior work has explored the use of planning-based methods for generating globally coherent text (Kiddon et al., 2016; Fan et al., 2019; Hua & Wang, 2020; Duboue & McKeown, 2001; Stent et al., 2004). However, these methods rely on manually defining text dynamics for specific domains. Other work has attempted to use sentence representations for modeling text, such as with variational auto-encoders (Bowman et al., 2016) or contrastive learning (Gao et al., 2021; Devlin et al., 2019). Their shortcoming in text generation settings is that the latent representations are static: they capture semantic similarity between sentence neighbors, but don't capture how sentence embeddings evolve over a document. Methods including van den Oord et al. (2019) have tried to remedy this by learning a model of local latent dynamics. However, it is difficult to use learned local dynamics for generating accurate goal-conditioned trajectories, especially long-horizon ones. We explore an alternative that explicitly assumes a simple, fixed dynamics model with goal-conditioned generation.

In this work, we propose Time Control as a way to learn a latent space with known, goal-conditioned dynamics.
We begin by assuming that meandering text generated without a goal can be represented as Brownian motion in latent space; this motion forces the embeddings of neighboring sentences to be similar to each other and those of distant sentences to be dissimilar. Goal-directed behavior can be incorporated into this model by conditioning on a fixed start and end point. In this case, the Brownian motion becomes a Brownian bridge and the resulting latent trajectories abide by simple, closed-form dynamics.

In Time Control, we derive a novel contrastive objective for learning a latent space with Brownian bridge dynamics. We can then use this latent space to generate text that retains local coherence and has improved global coherence. To perform text generation, Time Control first plans a latent trajectory via the Brownian bridge process pinned at a start and end point. It then conditionally generates sentences using this latent plan. In our work, we decode latent plans by fine-tuning GPT2 to generate text conditioned on Time Control's latent trajectory. Trajectories from Time Control act as abstract semantic positions in a document that guide generation of fine-tuned language models.

In summary, our work's contributions are the following:

- We derive Time Control, a language model which explicitly models latent structure with Brownian bridge dynamics learned using a novel contrastive objective.
- Across a range of text domains, we show that Time Control generates more or equally coherent text on tasks including text infilling and forced long text generation, compared to task-specific methods.
- We validate that our latent representations capture text dynamics competitively by evaluating discourse coherence with human experiments.
- We ablate our method to understand the importance of the contrastive objective, enforcing Brownian bridge dynamics, and explicitly modeling latent dynamics.
# 2 RELATED WORKS

Generating long, coherent text is conceptually difficult for autoregressive models because they lack the ability to model text structure and dynamics (Lin et al., 2021). This means that they struggle to plan and look ahead, which leads to generating globally incoherent text. Forcing autoregressive models to generate longer texts exacerbates this incoherence because the models struggle to extrapolate beyond their expected text end point. Prior work has tried to address the problem of generating globally coherent text with planning-based approaches (Puduppully et al., 2019; Moryossef et al., 2019; Fan et al., 2019; Kiddon et al., 2016). However, planning-based approaches rely on domain-specific heuristics for capturing text structure and dynamics.

Our work uses a contrastive objective to learn latent dynamics in text without domain-specific heuristics. Contrastive objectives have been applied to several domains, including language (Devlin et al., 2019; Iter et al., 2020; Liu & Liu, 2021), vision (Chen et al., 2020), and general time series data (Hyvarinen & Morioka, 2016; Hyvarinen et al., 2019). In particular for language, contrastive objectives have been applied to the next-sentence prediction task for improving BERT embeddings (Devlin et al., 2019) and to the discourse coherence setting (Nie et al., 2019; Chen et al., 2019b) for evaluating how coherent pairs of sentences are. However, these methods have two shortcomings which we address with our work. First, the resulting sentence embeddings are often static: they capture semantic similarity between sentence neighbors, but don't capture how sentence embeddings evolve over a document. Second, they are not used for generation and are limited to classification tasks like discourse coherence.
Prior work has also tried fitting latent variable models (Bowman et al., 2016); however, these generally result in poor language generation (He et al., 2018) or are domain-specific (Weber et al., 2020; Arora et al., 2016).

Our work is closely related to Contrastive Predictive Coding (CPC) from van den Oord et al. (2019). The key difference is that CPC implicitly learns unconditioned latent dynamics, whereas we impose known, goal-conditioned dynamics on our latent space; doing so, we can extrapolate successfully further in time. Additionally, our method builds off of recent findings that contrastive objectives can be used to approximate local transition kernels of stochastic processes (Liu et al., 2021). The main difference between Liu et al. (2021) and our work is that they focus on provable conditions for latent recovery; we focus on empirically effective methods that leverage similar insights for recovering latent representations from language. Finally, our use of stochastic processes draws similarities to diffusion models (Song et al., 2020; Sohl-Dickstein et al., 2015), which apply a chain of diffusion steps onto the data and learn to reverse the diffusion process. However, our application is conceptually different: diffusion processes characterize properties of our latent space and are not a fixed inference method in our work.

![](images/b47320a664bdf9d40eb20f1ae83937216bacabcaa1d73d70dbcbc06ce6c0b4b0.jpg)

Figure 1: Latent space for a positive triplet of sentences $(x_0, x_t, x_T)$ that are part of the same conversation. Time Control maps positive triplets to a smooth Brownian bridge trajectory. It embeds $z_t$ close to the expected embedding $\mu_t$ pinned by $z_0, z_T$. The green oval area illustrates the uncertainty over $z_t$ as a function of how close $t$ is to $0$ and $T$. In contrast, a negative random sentence $x'$ from a different conversation is not coherent with $x_0$ and $x_T$; thus, it is embedded far from $\mu_t$. This is captured by our contrastive loss,

$$
\mathcal{L} = -\log \frac{\exp(\mathrm{d}(z_t, \mu_t))}{\exp(\mathrm{d}(z_t, \mu_t)) + \exp(\mathrm{d}(z', \mu_t))}.
$$

Example sentences from the figure:

x_0: [USER] Hello, I'd like to buy tickets for tomorrow.
x_t: [ASSISTANT] What movie theater do you prefer?
x_T: [USER] Could you confirm my tickets just in case?
x': [USER] Hi, I'm looking to purchase tickets for my family.

# 3 TIME CONTROL

The intuition behind Time Control is to learn a latent space with smooth temporal dynamics for modeling and generating coherent text. We detail Time Control in three sections. The first section discusses training the encoder via contrastive learning to map sentences to a Brownian bridge (Revuz & Yor, 2013) latent space. The second section discusses training a decoder to reconstruct sentences from this latent space. The third section discusses generating text from Time Control.

# 3.1 TRAINING AN ENCODER WITH BROWNIAN BRIDGE DYNAMICS

Our encoder is a nonlinear mapping from raw input space to latent space, $f_{\theta} : \mathcal{X} \to \mathcal{Z}$. The objective for the encoder is to map high-dimensional sequential data into low-dimensional latents which follow a stochastic process of interest—in this paper, it is the Brownian bridge process. The density of a Brownian bridge process between an arbitrary start point $z_0$ at $t = 0$ and end point $z_T$ at $t = T$ is

$$
p\left(z_{t} \mid z_{0}, z_{T}\right) = \mathcal{N}\left(\left(1 - \frac{t}{T}\right) z_{0} + \frac{t}{T} z_{T},\; \frac{t(T - t)}{T}\right). \tag{1}
$$

This density is intuitive to understand: it acts like a noisy linear interpolation between the start and end point of the trajectory, where $z_{t}$ should be more like $z_{0}$ at the start and more like $z_{T}$ at the end of the trajectory. Uncertainty is highest in the middle region, and low near the end points (cf. Figure 1).
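The bridge density in Equation 1 can be illustrated numerically. The following is a minimal NumPy sketch (not the authors' code), assuming unit time steps; `bridge_params` evaluates the mean and variance of Equation 1, and `sample_bridge` draws a pinned path using the exact one-step bridge transition:

```python
import numpy as np

def bridge_params(z0, zT, t, T):
    """Mean and variance of the Brownian bridge density p(z_t | z_0, z_T) in Eq. 1."""
    mean = (1 - t / T) * z0 + (t / T) * zT
    var = t * (T - t) / T
    return mean, var

def sample_bridge(z0, zT, T, rng):
    """Draw a discrete bridge path pinned at z0 (t=0) and zT (t=T).

    Uses the exact one-step bridge transition: given z_t with r = T - t unit
    steps remaining, z_{t+1} ~ N(z_t + (zT - z_t) / r, (r - 1) / r).
    """
    traj = [np.asarray(z0, dtype=float)]
    for t in range(T - 1):
        z = traj[-1]
        r = T - t
        step_mean = z + (zT - z) / r
        step_var = (r - 1) / r
        traj.append(step_mean + np.sqrt(step_var) * rng.standard_normal(z.shape))
    traj.append(np.asarray(zT, dtype=float))
    return np.stack(traj)
```

Note that the per-step variance shrinks to zero as $t$ approaches $T$, so every sampled path hits the pinned end point exactly; this is what makes goal-conditioned trajectories straightforward under the bridge, in contrast to unconditioned Brownian motion.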
Consider a set of triplet observations, $(x_{1}, x_{2}, x_{3})$. The goal of our work is to ensure that $f_{\theta}(x_1), f_{\theta}(x_2), f_{\theta}(x_3)$ follow the Brownian bridge transition density in Equation 1. We ensure this using a contrastive objective. Formally, given multiple sequences of data points, $X = \{x_{1},\ldots ,x_{N}\}$, we draw batches consisting of randomly sampled positive triplets $(x_0, x_t, x_T)$ where $0 < t < T$: $\mathcal{B} = \{(x_0, x_t, x_T)\}$. Our encoder is optimized by

$$
\mathcal{L}_{N} = \mathbb{E}_{X}\left[-\log \frac{\exp\left(\mathrm{d}\left(x_{0}, x_{t}, x_{T}; f_{\theta}\right)\right)}{\sum_{\left(x_{0}, x_{t'}, x_{T}\right) \in \mathcal{B}} \exp\left(\mathrm{d}\left(x_{0}, x_{t'}, x_{T}; f_{\theta}\right)\right)}\right], \quad \text{where} \tag{2}
$$

$$
\mathrm{d}\left(x_{0}, x_{t}, x_{T}; f_{\theta}\right) = -\frac{1}{2\sigma^{2}}\left\|\underbrace{f_{\theta}\left(x_{t}\right)}_{z_{t}} - \underbrace{\left(1 - \frac{t}{T}\right) f_{\theta}\left(x_{0}\right) - \frac{t}{T} f_{\theta}\left(x_{T}\right)}_{\text{mean in Equation 1}}\right\|_{2}^{2}. \tag{3}
$$

$\sigma^2$ is the variance in Equation 1, $\frac{t(T - t)}{T}$. Note that Equation 2 sums over negative middle contrasts, $x_{t'}$. This objective can be viewed as maximizing the extent to which true triplets from the data follow the Brownian bridge process, while minimizing the extent to which an alternative mid-point sampled from another sequence does so.

Figure 1 illustrates how the objective translates into the language setting for training the encoder. The objective samples triplet sentences from a document. Sentences drawn from the same document make up a smooth latent trajectory; they should be close to each other and follow the conditional density in latent space.
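The contrastive objective in Equations 2 and 3 can be sketched as follows. This is an illustrative NumPy version, not the released implementation; for simplicity it assumes every triplet in the batch shares the same $t$ and $T$, whereas in practice $t$ and $T$ vary per triplet:

```python
import numpy as np

def d(z0, zt, zT, t, T):
    """Eq. 3: negative squared distance of z_t from the bridge mean,
    scaled by the Eq. 1 variance sigma^2 = t(T - t)/T."""
    sigma2 = t * (T - t) / T
    mean = (1 - t / T) * z0 + (t / T) * zT
    return -np.sum((zt - mean) ** 2) / (2 * sigma2)

def bridge_nce_loss(batch, t, T):
    """Eq. 2: each triplet's true middle point is the positive; the middle
    points of all other triplets in the batch act as the negatives x_{t'}.

    batch: list of encoded triplets (z0, zt, zT) = (f(x0), f(xt), f(xT)).
    """
    losses = []
    for i, (z0, _, zT) in enumerate(batch):
        # Score every candidate middle point against this triplet's end points.
        logits = np.array([d(z0, batch[j][1], zT, t, T) for j in range(len(batch))])
        # Numerically stable -log softmax at the positive index i.
        m = logits.max()
        logsumexp = m + np.log(np.exp(logits - m).sum())
        losses.append(logsumexp - logits[i])
    return float(np.mean(losses))
```

Minimizing this loss pulls each true middle embedding toward the linear interpolation of its own end points (the mean in Equation 1), while pushing middle points drawn from other sequences away from it.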
Sentences drawn from different documents should not make up a smooth trajectory and should be less likely to follow the bridge dynamics.

**Connection to mutual information estimation and triplet classification.** We draw connections between our contrastive loss and the mutual information estimation setup from van den Oord et al. (2019); Poole et al. (2019) (as $|\mathcal{B}| \to \infty$) and the classification setup from Liu et al. (2021) ($|\mathcal{B}| = 1$).

Following van den Oord et al. (2019); Poole et al. (2019), this objective can be seen as a lower bound on the mutual information between the two end points and the middle point: $I(X_{t}; \{X_{0}, X_{T}\}) \geq \log(N) - \mathcal{L}_{N}$. Hence, by minimizing the contrastive loss, we are maximizing the amount of information between the trajectory and the linear interpolation of its end points.

Assuming $|\mathcal{B}| = 1$, we can draw a connection to the classification setup studied in Liu et al. (2021). They train a classifier to distinguish in- vs. out-of-order input pairs and show that the Bayes-optimal logits for pair-wise classification can be written as a function of the stochastic process transition kernel. This is equivalent to our loss on a single triplet $i$: $l_i = -\log \frac{\exp(\mathrm{d}(x_0, x_t, x_T; f_\theta))}{\exp(\mathrm{d}(x_0, x_t, x_T; f_\theta)) + \exp(\mathrm{d}(x_0, x_{t'}, x_T; f_\theta))}$. Liu et al. (2021) consider pairs whereas our work considers triplets; we show in Appendix A that the pairwise and triplet setups are equivalent.

# 3.2 TRAINING A DECODER WITH LATENT PLANS

Here we discuss how to train a language model to decode latent plans for generation. We first map all the sentences in the training dataset to our learned latent space using the pretrained encoder $f_{\theta}$. This gives us a Brownian bridge trajectory of sentence-level latent codes $(z_0, \dots, z_t, \dots, z_T)$ for a document in the dataset.
Then, rather than learning a decoder from scratch, we fine-tune GPT2 (Radford et al., 2019) to generate text conditioned on past context and the latent plan.

We fine-tune in the following manner. Let $x_{1} \ldots x_{W}$ be a document with $W$ tokens and $T$ sentences used to train the decoder. Using the encoder $f_{\theta}$, we can obtain embeddings $z_{1} \ldots z_{T}$ for each sentence. The decoder is a standard auto-regressive language model that is modified in the following way: at time $t$, the decoder must predict $x_{t}$ using all tokens in the past $x_{